True Crime Collection: Web Scraping Using Python and Beautiful Soup

Joann has worked as a developer in the Analytics and Artificial Intelligence industry and is experienced in data scraping, data mining, and more.

Introduction

In the past few years, several crimes have been solved by regular people who have access to the internet. Someone even developed a serial killer detector. Whether you're a fan of true crime stories and just want to do some extra reading, or you want to use this crime-related information for your research, this article will help you collect information from the websites of your choice.

In this article, I will guide you through collecting information from two different website formats and explain a few blocks of code and how we came up with them. If you need a more detailed tutorial on every component in the code, I highly recommend watching Corey Schafer's YouTube tutorial.

Requirements

Python

I'm using Python 3.6.8, but you can use other versions. Some of the syntax may differ, especially in Python 2.

Beautiful Soup

Once you're done installing Python, you can get Beautiful Soup by entering "pip install beautifulsoup4" in your terminal. For more information, visit Crummy.
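
The code in this article also uses the requests library and the lxml parser. If you don't have them yet, you can install everything in one go (assuming pip is available in your terminal):

pip install beautifulsoup4 requests lxml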

Step-by-step

We will collect our stories from the following websites:

  1. Bizarrepedia
  2. Criminal Minds Wiki

To do this, we first have to scan through each website's pages and articles so we can tell our program where to look. Each website is structured differently, so a single script will not work for both.

We will follow the steps below for each website:

  1. Identify how many crime-related pages the website has.
  2. Write the code to navigate from the first page to the last.
  3. Write the code to scan through stories on every page.

Scraping Bizarrepedia

Let's start with Bizarrepedia's crime category.

Step 1: Identify how many crime-related pages the website has

Go to https://www.bizarrepedia.com/crime and scroll down until you see the button that says "LOAD MORE POSTS".

The main page of the "Crime" category.

The main page of the "Crime" category.


Right-click the "LOAD MORE POSTS" button and click "Inspect Element" to open your browser's Developer Tools window (I'm using Firefox). You will see something like this:

<a href="#" id="load-more-posts" data-uri="https://www.bizarrepedia.com/crime/" data-num-pages="11" data-current-page="1" style="display: block;">Load more posts (108) <i></i></a>

Here, we can see that there are 11 crime-related pages and 108 articles. We are currently on page 1.

The Developer Tools window.

Close the Developer Tools window and click the "LOAD MORE POSTS" button.

The URL in the address bar will change to "https://www.bizarrepedia.com/crime/#page/2", the button will disappear, and a new list of articles will appear below where the button used to be.


Now, we can see all the articles from page 1 to 2. Go to "https://www.bizarrepedia.com/crime/page/2/" to access only the page 2 articles.

Page 2 articles.

Bizarrepedia's Format

If you scroll down further, you will find the button again. If you click it, the page number in the URL will change to 3, the button will disappear again, a new list of articles will appear, and so on. This format is not the same for all websites; some simply direct you to another page instead of loading new articles at the bottom.

We were able to load these articles by interacting with the website's user interface, but when we write the code to navigate to different pages, we can't interact with the user interface (there are tools for that, like Selenium, but in this example we will stick to Beautiful Soup to keep things simple). We need to find out the exact URL for each page so we can loop through the pages in our code.

Let's Start with the Code Below

The printed result you will get from the code below is similar to the contents you will see when you go to "https://www.bizarrepedia.com/crime/", right-click a blank space anywhere on the page, and click "View Page Source", or when you type the text below into your address bar:

view-source:https://www.bizarrepedia.com/crime/

This page is our starting point, and we will get all the information we need to navigate through the pages and their articles starting from here.

bizarrepedia.py

import requests
from bs4 import BeautifulSoup


# Retrieve all pages
url = "https://www.bizarrepedia.com/crime/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")

print(soup.prettify())

View Source.

The result after running the Python code.
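
Before moving on, note that requests will not raise an error on its own if a page fails to load. If you want the script to stop on a bad response, you can add an optional check (not part of the original script):

response = requests.get(url)
response.raise_for_status()  # Raises requests.exceptions.HTTPError for 4xx/5xx responses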

Step 2: Write the code to navigate from the first page to the last

Now we can use the line we saw earlier in the browser's Developer Tools window:

<a href="#" id="load-more-posts" data-uri="https://www.bizarrepedia.com/crime/" data-num-pages="11" data-current-page="1" style="display: block;">Load more posts (108) <i></i></a>

We will add a line to the code to get the total number of pages to loop through. We'll find the "a" element that has the attribute "data-num-pages" and get that attribute's value.

bizarrepedia.py

import requests
from bs4 import BeautifulSoup


# Retrieve all pages
url = "https://www.bizarrepedia.com/crime/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
pages = int(soup.find("a", {"data-num-pages": True})["data-num-pages"])

for page in range(1, pages + 1):
    if page == 1: # First page has no page number
        url = "https://www.bizarrepedia.com/crime"
    else:
        url = "https://www.bizarrepedia.com/crime/page/" + str(page)

Getting the Number of Pages

In the pages = int(...) line, we get the number of pages by first finding an "a" element that has an attribute called "data-num-pages" (that's what the True value matches). We retrieve that attribute's value using ["data-num-pages"] and wrap the whole expression in int() so we can use it as the range when we loop through all the pages. This gives us the value 11.
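
If you want to see this extraction in isolation, you can feed the anchor tag we copied from the Developer Tools window straight into Beautiful Soup. A minimal sketch:

from bs4 import BeautifulSoup

html = '<a href="#" id="load-more-posts" data-uri="https://www.bizarrepedia.com/crime/" data-num-pages="11" data-current-page="1">Load more posts (108) <i></i></a>'
soup = BeautifulSoup(html, "lxml")
link = soup.find("a", {"data-num-pages": True})  # True matches any "a" that has the attribute

print(link["data-num-pages"])       # "11" (a string)
print(int(link["data-num-pages"]))  # 11 (ready to use with range())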

Looping Through the Pages

In the for loop, we go through page numbers 1 to 11. We add 1 to the total number of pages since we started our range at 1 instead of the default 0. If we don't do this, the loop will stop at page 10 and we will miss the articles on page 11.

Inside the for loop, we add an if condition to use the initial crime page URL for page 1 and append "/page/<page no.>" for pages 2 to 11. This is because there is no "https://www.bizarrepedia.com/crime/page/1".

See?
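
You can also sanity-check the URL logic on its own before doing any scraping. This quick sketch just prints the URLs the loop will visit (shortened to the first three pages here):

for page in range(1, 4):
    if page == 1:  # First page has no page number
        url = "https://www.bizarrepedia.com/crime"
    else:
        url = "https://www.bizarrepedia.com/crime/page/" + str(page)
    print(url)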

Step 3: Write the code to scan through stories on every page

As we loop through the pages, we use each page's URL to scan through its stories. Each story has a subject and a main story. The subject is who the criminal is, for example, John Wayne Gacy. The main story is the full story about the criminal. On this website, the full story is split into separate paragraphs, so we combine those paragraphs and print out the result.

bizarrepedia.py

import requests
from bs4 import BeautifulSoup


# Retrieve all pages
url = "https://www.bizarrepedia.com/crime/"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
pages = int(soup.find("a", {"data-num-pages": True})["data-num-pages"])

for page in range(1, pages + 1):
    if page == 1: # First page has no page number
        url = "https://www.bizarrepedia.com/crime"
    else:
        url = "https://www.bizarrepedia.com/crime/page/" + str(page)

    # Retrieve each story
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "lxml")
    stories = soup.find_all("a", {"class": "bx"})

    for story in stories:
        response = requests.get(story["href"])
        soup = BeautifulSoup(response.text, "lxml")
        subject = soup.find("h1", {"class": "entry"}).text
        main_story = soup.find("div", {"class": "typography"})
        blocks = main_story.find_all("p")
        full_story = ""

        for block in blocks:
            full_story = full_story + block.text + "\n\n"
        print(subject + "\n\n" + full_story)
        break
    break

In the stories = soup.find_all("a", {"class": "bx"}) line, we retrieve all "a" elements with a "class" attribute value of "bx". We found this by right-clicking one of the titles on the page and clicking "Inspect Element" like we did before. This opens the browser's Developer Tools, where we'll find the link to the story.

The link to the story.

In the inner for loop, we go through each story URL that we found. We get the subject by hovering over the story's title and inspecting the element. We'll find that the title is inside an "h1" element with a "class" value of "entry". We get the text inside the element using .text.

subject = soup.find("h1", {"class": "entry"}).text

Subject: The title of the story.

We get the main story by hovering over the story's body and inspecting the element. We'll find that the full story is split into paragraphs, or "p" elements, all of which are inside a "div" element with a "class" value of "typography".

main_story = soup.find("div", {"class": "typography"})
blocks = main_story.find_all("p")

We declare a new variable named "full_story", loop through the paragraphs, and append each paragraph's text to the variable. Once there are no more paragraphs to go through, we print the subject and the full story.

        full_story = ""
 
        for block in blocks:
            full_story = full_story + block.text + "\n\n"
        print(subject + "\n\n" + full_story)
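
As a side note, the same concatenation can be written with str.join, which builds the string in one pass instead of re-creating it on every iteration:

# Equivalent to the loop above (for a non-empty list of paragraphs)
full_story = "\n\n".join(block.text for block in blocks) + "\n\n"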

You will notice the two break statements at the end of the code. They break out of the two for loops: the stories loop and the pages loop. You can remove them to scrape all the stories on all pages; in this example, I added them so we only get one story. Running the code will give you the output below.

The result of scraping one of Bizarrepedia's stories.
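
One more tip before you remove the breaks and scrape everything: it's courteous to pause between requests so you don't flood the site with traffic. A minimal sketch using the standard library (the one-second delay is an arbitrary choice):

import time

for story in stories:
    time.sleep(1)  # Wait a second before each request to go easy on the server
    response = requests.get(story["href"])
    # ... parse the story as before ...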

Scraping Criminal Minds

Let's continue with Criminal Minds' serial killer category.

Go to https://criminalminds.fandom.com/wiki/Real_Criminals/Serial_Killers.

The main page of the "Serial Killers" category.

The main page of the "Serial Killers" category.

Step 2: Write the code to navigate from the first page to the last

The Criminal Minds website's format is a little more straightforward, so we can skip this step. All of the stories can be accessed through this one page. If you scroll down further, you will see that all of the subjects are listed in alphabetical order.


Step 3: Write the code to scan through stories on every page

Since we only have one page, we can proceed to scanning through the stories. The rest of the steps are pretty similar to what we did for Bizarrepedia. The Criminal Minds website has an interesting "quote" block for each subject, so I decided to include it.


If you follow the steps we took when scraping stories from Bizarrepedia, you should end up with code similar to the one below. It doesn't have to be exactly the same; there are several ways to do things, and you can choose which elements to start with as long as you get the same result.

criminal_minds.py

import requests
from bs4 import BeautifulSoup


# Retrieve all stories
url = "https://criminalminds.fandom.com/wiki/Real_Criminals/Serial_Killers"
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
stories = soup.find_all("div", {"class": "lightbox-caption"})

for story in stories:
    # Retrieve each story
    url = "https://criminalminds.fandom.com" + story.find("a")["href"]
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "lxml")
    main_story = soup.find("div", {"id": "mw-content-text"})
    quote = " ".join(main_story.find("table").text.split())
    subject = story.find("a")["title"]
    blocks = main_story.find_all("p")
    full_story = ""

    for block in blocks:
        full_story = full_story + block.text + "\n"
    print(quote + "\n" + subject + "\n\n" + full_story)
    break

The result of scraping one of Criminal Minds' stories.
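
One line worth calling out is the one that builds the quote: main_story.find("table") grabs the quote table at the top of the article, and " ".join(text.split()) is a common Python idiom for collapsing newlines, tabs, and repeated spaces into single spaces. A small standalone example:

messy = "  A   quote\nwith \t messy\n\nwhitespace "
print(" ".join(messy.split()))  # A quote with messy whitespace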

Finally

Now that you can collect this information, what do you plan to do with it? Surely, you will not be satisfied with viewing the results like this. In the next article, I will guide you through loading all of this information into Elasticsearch. That will let us save the stories, search through them, and extract useful, structured information from them, such as activity dates, victims, weapons, etc.

© 2019 Joann Mistica
