True Crime Collection: Web Scraping Using Python and Beautiful Soup

Updated on October 12, 2019

Introduction

In the past few years, several crimes have been solved by regular people with access to the internet. Someone even developed a serial killer detector. Whether you're a fan of true crime stories and just want to do some extra reading, or you want to use crime-related information for your research, this article will help you collect information from your websites of choice.

In this article, I will guide you through collecting information from two different website formats and explain a few blocks of code and how we came up with them. If you need a more detailed tutorial on every component in the code, I highly recommend watching Corey Schafer's YouTube tutorial.

Requirements

Python

I'm using Python 3.6.8, but you can use other versions. Some of the syntax may differ, especially in Python 2.

Beautiful Soup

Once you're done installing Python, you can get Beautiful Soup by entering "pip install beautifulsoup4" in your terminal. For more information, visit Crummy.

Step-by-step

We will collect our stories from the following websites:

  - Bizarrepedia's crime category (https://www.bizarrepedia.com/crime)
  - The Criminal Minds Wiki's serial killer category (https://criminalminds.fandom.com/wiki/Real_Criminals/Serial_Killers)

To do this, we first have to scan through each website's pages and articles so we can tell our program where to look. Each website has a different structure, so one script does not fit all.

We will follow the steps below for each website:

  1. Identify how many crime-related pages the website has.
  2. Write the code to navigate from the first page to the last.
  3. Write the code to scan through stories on every page.

Scraping Bizarrepedia

Step 1: Identify how many crime-related pages the website has

Let's start with Bizarrepedia's crime category.

Go to https://www.bizarrepedia.com/crime and scroll down until you see the button that says "LOAD MORE POSTS".

The main page of the "Crime" category.

Right-click the "LOAD MORE POSTS" button and click "Inspect Element" to open your browser's Developer Tools window (I'm using Firefox). You will see something like this:

<a href="#" id="load-more-posts" data-uri="https://www.bizarrepedia.com/crime/" data-num-pages="11" data-current-page="1" style="display: block;">Load more posts (108) <i></i></a>

Here, we can see that there are 11 crime-related pages and 108 articles. We are currently on page 1.

The Developer Tools window.
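
As a quick sanity check, we can paste that element into Beautiful Soup as a standalone string and see how it exposes the attributes. Tag attributes are accessed like dictionary keys. This snippet is just for experimenting and is not part of the final scraper:

from bs4 import BeautifulSoup

# The "LOAD MORE POSTS" element copied from the Developer Tools window
html = ('<a href="#" id="load-more-posts" '
        'data-uri="https://www.bizarrepedia.com/crime/" '
        'data-num-pages="11" data-current-page="1">'
        'Load more posts (108) <i></i></a>')

link = BeautifulSoup(html, "lxml").find("a", id="load-more-posts")
print(link["data-num-pages"])     # 11
print(link["data-current-page"])  # 1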

Close the Developer Tools window and click the "LOAD MORE POSTS" button.

The URL in the address bar will change to "https://www.bizarrepedia.com/crime/#page/2", the button will disappear, and a new list of articles will appear below where the button used to be.

Now, we can see all the articles from page 1 to 2. Go to "https://www.bizarrepedia.com/crime/page/2/" to access only the page 2 articles.

Page 2 articles.

Bizarrepedia's Format

If you scroll down further, you will find the button again. If you click it, the page number in the URL changes to 3, the button disappears again, a new list of articles appears, and so on. This format is not the same for all websites; some simply direct you to another page instead of loading new articles at the bottom.

We were able to load these articles by interacting with the website's user interface, but when we write the code to navigate to different pages, we can't interact with the user interface (there are tools for that, like Selenium, but in this example we will only use Beautiful Soup to keep things simple). We need to find out the exact URL for each page so we can loop through them in our code.

Let's Start with the Code Below

The printed result you will get from the code below is similar to the contents you will see when you go to "https://www.bizarrepedia.com/crime/", right-click on a blank space anywhere on the page, and click "View Page Source", or by typing the text below into your address bar:

view-source:https://www.bizarrepedia.com/crime/

This page is our starting point; from it, we will get all the information we need to navigate through the pages and their articles.

import requests
from bs4 import BeautifulSoup


# Retrieve all pages
url = "https://www.bizarrepedia.com/crime/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")

print(soup.prettify())

View Source.
The result after running the Python code.

Step 2: Write the code to navigate from the first page to the last

Now we can use the line we saw earlier in the browser's Developer Tools window:

<a href="#" id="load-more-posts" data-uri="https://www.bizarrepedia.com/crime/" data-num-pages="11" data-current-page="1" style="display: block;">Load more posts (108) <i></i></a>

We will add a line in the code to get the total number of pages to loop through. We'll find the "a" element that has the "data-num-pages" attribute and get that attribute's value.

import requests
from bs4 import BeautifulSoup


# Retrieve all pages
url = "https://www.bizarrepedia.com/crime/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
pages = int(soup.find("a", {"data-num-pages": True})["data-num-pages"])

for page in range(1, pages + 1):
    if page == 1: # First page has no page number
        url = "https://www.bizarrepedia.com/crime"
    else:
        url = "https://www.bizarrepedia.com/crime/page/" + str(page)

Getting the Number of Pages

In the line that assigns "pages", we get the number of pages by first finding an "a" element that has an attribute called "data-num-pages" (the True value means the attribute just has to exist). We retrieve that attribute's value using ["data-num-pages"], then wrap the expression in int() so we can use it as the range when we loop through all the pages. This gives us the value 11.
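
One thing to keep in mind: soup.find() returns None when no matching element exists (for example, if the site changes its markup), and indexing None raises a TypeError. Here is a minimal defensive sketch, assuming we simply fall back to a single page in that case:

import requests
from bs4 import BeautifulSoup

url = "https://www.bizarrepedia.com/crime/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")

# find() returns None if the element is missing; fall back to one page
link = soup.find("a", {"data-num-pages": True})
pages = int(link["data-num-pages"]) if link else 1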

Looping Through the Pages

In the for loop, we go through page numbers 1 to 11. We add 1 to the total number of pages because range() excludes its end value; if we don't, the loop will stop at page 10 and we will miss the articles on page 11.

Inside the for loop, we add an if condition to use the initial crime page URL for page 1 and append "/page/<page no.>" for pages 2 to 11. This is because there is no "https://www.bizarrepedia.com/crime/page/1".

See?
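
As an aside, the same URLs can be precomputed with a small helper function. This is just an illustrative sketch; the page_urls name is mine and not part of the article's code:

# Hypothetical helper that returns all page URLs in order
def page_urls(base="https://www.bizarrepedia.com/crime", pages=11):
    # Page 1 has no page number; pages 2 to 11 use /page/<n>
    return [base] + [base + "/page/" + str(n) for n in range(2, pages + 1)]

print(page_urls())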

Step 3: Write the code to scan through stories on every page

As we loop through the pages, we use each page's URL to scan through the stories. Each story has a subject and a main story. The subject is the criminal, for example, John Wayne Gacy. The main story is the full story about the criminal. On this website, the full story is separated into different paragraphs, so we combine those paragraphs and print out the result.

import requests
from bs4 import BeautifulSoup


# Retrieve all pages
url = "https://www.bizarrepedia.com/crime/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
pages = int(soup.find("a", {"data-num-pages": True})["data-num-pages"])

for page in range(1, pages + 1):
    if page == 1: # First page has no page number
        url = "https://www.bizarrepedia.com/crime"
    else:
        url = "https://www.bizarrepedia.com/crime/page/" + str(page)

    # Retrieve each story
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "lxml")
    stories = soup.find_all("a", {"class": "bx"})

    for story in stories:
        response = requests.get(story["href"])
        soup = BeautifulSoup(response.text, "lxml")
        subject = soup.find("h1", {"class": "entry"}).text
        main_story = soup.find("div", {"class": "typography"})
        blocks = main_story.find_all("p")
        full_story = ""

        for block in blocks:
            full_story = full_story + block.text + "\n\n"
        print(subject + "\n\n" + full_story)
        break
    break

In the lines after the "# Retrieve each story" comment, we retrieve all "a" elements with a "class" attribute value of "bx". We find this by right-clicking one of the titles on the page and clicking "Inspect Element" like we did before. This opens the browser's Developer Tools, where we'll find the link to the story.

The link to the story.

In the inner for loop, we request each story URL that we found. We get the subject by hovering over the story's title and inspecting the element. We'll find that the title is inside an "h1" element with a "class" value of "entry". We get the text inside the element using .text.

subject = soup.find("h1", {"class": "entry"}).text
Subject: The title of the story.

We get the main story by hovering over the story's body and inspecting the element. We'll find that the full story is separated into paragraphs, or "p" elements, all of which are inside a "div" element with a "class" value of "typography".

main_story = soup.find("div", {"class": "typography"})
blocks = main_story.find_all("p")

We declare a new variable named "full_story", loop through each paragraph, and append each paragraph's text to it. Once there are no more paragraphs to go through, we print the subject and the full story.

full_story = ""

for block in blocks:
    full_story = full_story + block.text + "\n\n"
print(subject + "\n\n" + full_story)
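
As a side note, the same text can be built more idiomatically with str.join, which avoids creating a new string on every iteration. This drop-in replacement for the loop above produces the same output apart from the trailing blank lines:

# "blocks" is the list of "p" elements found above
full_story = "\n\n".join(block.text for block in blocks)
print(subject + "\n\n" + full_story)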

You will notice the two break statements at the end of the code. These break out of the two for loops: the stories loop and the pages loop. You can remove them to scrape all the stories on all pages; in this example, I added them so we only get one story (a break-free alternative is sketched after the screenshot). Running the code will give you the output below.

The result of scraping one of Bizarrepedia's stories.
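
If you prefer to avoid nested break statements, one common alternative is to wrap the loops in a generator and let the caller decide how many stories to take. This is only a structural sketch with placeholder loops, not the article's scraper:

from itertools import islice

# Placeholder generator standing in for the pages/stories loops above
def stories():
    for page in range(1, 12):
        for n in range(1, 11):
            yield "page " + str(page) + ", story " + str(n)

# Take only the first story; no break statements needed
for story in islice(stories(), 1):
    print(story)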

Scraping Criminal Minds

Step 1: Identify how many crime-related pages the website has

Let's continue with Criminal Minds' serial killer category.

Go to https://criminalminds.fandom.com/wiki/Real_Criminals/Serial_Killers.

The main page of the "Serial Killers" category.

Step 2: Write the code to navigate from the first page to the last

The Criminal Minds website's format is a little more straightforward, so we can skip this step: all of the stories can be accessed through this one page. If you scroll down further, you will see that all of the subjects are listed in alphabetical order.

Step 3: Write the code to scan through stories on every page

Since we only have one page, we can proceed to scanning through the stories. The rest of the steps are pretty similar to what we did for Bizarrepedia. The Criminal Minds website has an interesting "quote" block for each subject so I decided to include it.

If you follow the steps we did when scraping stories from Bizarrepedia, you should end up with code similar to the one below. It doesn't have to be exactly the same; there are several ways to do things, and you can choose which elements to start with as long as you get the same result.

import requests
from bs4 import BeautifulSoup


# Retrieve all stories
url = "https://criminalminds.fandom.com/wiki/Real_Criminals/Serial_Killers"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
stories = soup.find_all("div", {"class": "lightbox-caption"})

for story in stories:
    # Retrieve each story
    url = "https://criminalminds.fandom.com" + story.find("a")["href"]
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "lxml")
    main_story = soup.find("div", {"id": "mw-content-text"})
    # The quote sits in a table at the top; collapse its whitespace into single spaces
    quote = " ".join(main_story.find("table").text.split())
    subject = story.find("a")["title"]
    blocks = main_story.find_all("p")
    full_story = ""

    for block in blocks:
        full_story = full_story + block.text + "\n"
    print(quote + "\n" + subject + "\n\n" + full_story)
    break

The result of scraping one of Criminal Minds' stories.
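
The only new trick here is the quote line. The text inside the table contains newlines, tabs, and runs of spaces, and " ".join(text.split()) collapses all of that whitespace into single spaces. Here is a tiny standalone illustration (the sample string is made up):

# split() with no argument splits on any run of whitespace,
# so re-joining with single spaces flattens the text to one line
raw = "  He was\n    always   so\tnice  "
print(" ".join(raw.split()))  # He was always so nice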

Finally

Now that you can collect this information, what do you plan to do with it? Surely, you will not be satisfied with viewing the results like this. In the next article, I will guide you through loading all of this information into Elasticsearch. This will enable us to save the stories, search through them, and extract useful and structured information from them, such as activity dates, victims, weapons, etc.

    © 2019 Joann Mistica
