
Working with Data

What you need to know for Data Management and Data Wrangling

Overview

If you come across a website displaying data you would like to use:

  1. Does the website allow you to download the data in a format like XML or CSV?
    • If not, can you find the data you need elsewhere (Google, governments, international organizations, the library, etc.)?
  2. Does the website offer an API? This may not be immediately obvious and might require some research.
  3. Is the website otherwise trying to provide data openly? If so, email the maintainers to ask about additional options.
  4. If none of those options is available, you might consider web scraping, but check that it is not prohibited.

Both APIs and web scraping have two parts:

  1. Make the request: specify a URL (yes, a normal URL)
    • For web scraping, it is the same URL you use in a web browser (because it “returns” an HTML file) – Easier
    • For APIs, the URL points to the API endpoint and includes keys and values specifying what you want – Harder
  2. Process the response: save the file you get and extract the data
    • For web scraping, you receive an HTML file with the web content, which must be parsed to extract the data – Harder
    • For APIs, you receive a file in a structured format (often XML or JSON), which gives clean, easy-to-access data – Easier
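The two parts above can be sketched with only the Python standard library. The endpoint address and parameter names below are hypothetical placeholders, not a real API; the response body is a made-up sample so the example works offline.

```python
import json
from urllib.parse import urlencode

# Part 1: make the request -- build a URL whose query string carries
# keys and values describing the data you want. (Hypothetical endpoint
# and parameters, for illustration only.)
base_url = "https://api.example.org/v1/records"
params = {"country": "DE", "year": 2020, "format": "json"}
url = f"{base_url}?{urlencode(params)}"
print(url)  # https://api.example.org/v1/records?country=DE&year=2020&format=json

# Part 2: process the response -- an API typically returns JSON, which
# parses directly into Python data structures. A sample response body:
response_body = '{"records": [{"country": "DE", "year": 2020, "value": 42}]}'
data = json.loads(response_body)
first = data["records"][0]
print(first["value"])  # clean, easy-to-access data -- no HTML parsing needed
```

In a real script, the response body would come from an HTTP call (e.g. `urllib.request.urlopen(url)`), usually with an API key included among the parameters or headers.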

Identify the Tools You Need:

  1. If you just need links, images, or other non-HTML content, consider whether a browser plugin like DownThemAll! would be sufficient.
  2. If you need to extract content within specific webpages, you will need a parser--unless you are working with a small number of pages and can use manual tools like Scraper (see #1).
  3. If you need to follow links to get additional pages, you will need a spider--unless the links are knowable ahead of time, as in many paginated tables (see #2).
  4. If the website uses JavaScript/AJAX to display content you need, you will need a web driver--unless you can identify the underlying API call using browser developer tools (see #3).
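To make the parser case (#2) concrete, here is a minimal sketch using Python's built-in html.parser module to collect every link on a page. In practice you would more likely reach for a library such as Beautiful Soup, and the HTML would come from an HTTP response; the snippet below is invented so the example is self-contained.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag it encounters."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Made-up HTML standing in for a downloaded page -- note this is also an
# example of "knowable ahead of time" links from a paginated table (#3).
html = """
<ul>
  <li><a href="/data/2019.csv">2019</a></li>
  <li><a href="/data/2020.csv">2020</a></li>
</ul>
"""
collector = LinkCollector()
collector.feed(html)
print(collector.links)  # ['/data/2019.csv', '/data/2020.csv']
```

Feeding each fetched page into a parser like this, then requesting the URLs it finds, is essentially what a spider (#3) automates.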

The Basics

Go through these tutorials before we meet up.

Learn More

APIs

Web Scraping