Scrapy masterclass: Python web scraping and data pipelines

Work on 7 real-world web-scraping projects using Scrapy, Splash, and Selenium. Build data pipelines locally and on AWS

This is the era of data!

What you’ll learn

  • Extract data from even the most difficult websites using Scrapy.
  • Build ETL pipelines and store data in CSV, JSON, MySQL, MongoDB, and S3.
  • Avoid getting banned and evade bot-protection techniques.
  • Use Splash for scraping JavaScript-powered websites (see the sketch after this list).
  • Harness the power of Selenium browser automation to scrape any website.
  • Deploy your Scrapy bots in local and AWS environments.
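To give you a taste of the Splash point above, here is a minimal sketch of a spider that renders a JavaScript page before parsing it. It assumes the scrapy-splash package is installed and configured (SPLASH_URL plus the middlewares its documentation lists) and a Splash instance running locally on port 8050; the target is a public practice page rendered entirely by JavaScript.

```python
import scrapy
from scrapy_splash import SplashRequest  # from the scrapy-splash package


class JsQuotesSpider(scrapy.Spider):
    """Render a JavaScript-heavy page through Splash, then parse the result."""

    name = "js_quotes"

    def start_requests(self):
        # Splash loads the page in a headless browser and waits for the JS to run.
        yield SplashRequest(
            "https://quotes.toscrape.com/js/",  # practice page built by JavaScript
            callback=self.parse,
            args={"wait": 1.0},
        )

    def parse(self, response):
        # By the time we get here, the HTML already contains the JS-generated content.
        for text in response.css("div.quote span.text::text").getall():
            yield {"text": text}
```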

Course Content

  • Introduction –> 4 lectures • 13min.
  • XPath first steps –> 5 lectures • 32min.
  • Hello Scrapy –> 8 lectures • 1hr 3min.
  • Scrapy web-scraping scenarios –> 9 lectures • 57min.
  • Data transformation using Scrapy Pipelines –> 4 lectures • 23min.
  • Data loading (storage) using Scrapy’s pipelines –> 6 lectures • 44min.
  • Scrapy Middleware (or how to avoid getting banned) –> 4 lectures • 29min.
  • Handling JavaScript websites using Splash –> 6 lectures • 49min.
  • Browser automation using Selenium and Scrapy –> 4 lectures • 41min.


Description


Everyone is telling you what to do with the data that you already have. But how can you “have” this data?

Most data engineering and data science discussions today focus on how to analyze and process datasets to draw useful information out of them. However, they all assume that those datasets are already available to you, that they've been collected somehow. They spend little time showing how you can obtain those datasets in the first place! This course fills that gap.

This course is all about walking you through the process of extracting the data you're interested in from websites. True, there are plenty of datasets already available for you to consume, either for free or at some cost. But what if those datasets are outdated? What if they don't address your specific needs? You'd better know how to build your own dataset from scratch, no matter how unstructured your data source is.

Scrapy is a Python web scraping framework. Thousands of companies and professionals use it to collect data and build datasets, which they then sell or use in their own projects. Today, you can become one of those professionals, and you can even build your own business around data harvesting!
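To give you a first taste, here is roughly what a minimal Scrapy spider looks like. This is a sketch against quotes.toscrape.com, a public practice site; the class builds up each piece (selectors, items, pagination) step by step.

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    """A minimal spider: fetch a page, extract structured items, follow pagination."""

    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Each quote on the page lives in a <div class="quote"> block.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
                "tags": quote.css("div.tags a.tag::text").getall(),
            }

        # Follow the "Next" link until the pagination runs out.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Running scrapy crawl quotes -O quotes.json (Scrapy 2.x) writes every scraped quote to a JSON file.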

Today, data scientists and data engineers are among the most highly paid professionals in the industry. Yet if they don't have enough data to work on, they can do nothing.

In this class, I'll show you how to obtain, organize, and store unstructured data from websites' HTML, CSS, and JavaScript. Having mastered that skill, you can start your data engineering or data science career with an extra skill set under your belt: web scraping.
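As a small illustration of what "organizing unstructured data" means in practice, here is a standalone sketch using Scrapy's Selector on a raw HTML fragment (the fragment and field names are made up for the example); it shows the CSS and XPath syntax the course starts with.

```python
from scrapy.selector import Selector

# A raw, unstructured HTML fragment, as you might find in any page source.
html = """
<div class="product">
  <h2>Mechanical keyboard</h2>
  <span class="price">$89.99</span>
</div>
"""

sel = Selector(text=html)

# The same data point, extracted two ways:
name_css = sel.css("div.product h2::text").get()                   # CSS selector
name_xpath = sel.xpath("//div[@class='product']/h2/text()").get()  # XPath

price = sel.css("span.price::text").get()

print(name_css, name_xpath, price)
# -> Mechanical keyboard Mechanical keyboard $89.99
```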

You will also learn the next steps after you obtain your data. ETL (Extract, Transform, Load) starts with Scrapy (the Extract step), but this course covers the other two stages (Transform and Load) as well. Using Scrapy pipelines, we'll see how to store our data in SQL and NoSQL databases, Elasticsearch clusters, event brokers like Kafka, object storage like S3, and message queues like AWS SQS.
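As a concrete example of the Load step, here is a hedged sketch of a Scrapy item pipeline that writes every item to MongoDB, closely following the pattern in Scrapy's documentation; the setting names and defaults are placeholders you would adapt.

```python
import pymongo  # MongoDB client library


class MongoPipeline:
    """Load step of ETL: push every scraped item into a MongoDB collection."""

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # Read connection details from the project settings (defaults are placeholders).
        return cls(
            mongo_uri=crawler.settings.get("MONGO_URI", "mongodb://localhost:27017"),
            mongo_db=crawler.settings.get("MONGO_DATABASE", "scraped_data"),
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # One document per item; the collection is named after the spider.
        self.db[spider.name].insert_one(dict(item))
        return item
```

Enabling it is one line in settings.py, e.g. ITEM_PIPELINES = {"myproject.pipelines.MongoPipeline": 300} (the module path is a placeholder); swapping MongoDB for MySQL, S3, or Kafka changes only the body of process_item.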

Even if you know nothing about web scraping or data harvesting, even if all of this seems new to you, you’ve come to the right place.

I’ve designed this class for total beginners. It will walk you from “What is web scraping? What is Scrapy? Why should I learn and use it?” all the way up to “Now I have several gigabytes of web-scraped data from dozens of websites. Let’s figure out how we can put them to effective use”.

Web scraping can be as easy as extracting some text from an HTML page, or as involved as going several levels deep across multiple websites, crawling each link and hopping from one page to another. It can also get incredibly challenging when websites put up blockers to keep web bots out. Don't worry: we'll address all of these use cases and, together, figure out how to overcome them.
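As a preview of the middleware material, here is a hedged sketch of one of the simplest ban-avoidance techniques covered: a downloader middleware that rotates the User-Agent header on every request (the agent strings and module path are illustrative).

```python
import random


class RotateUserAgentMiddleware:
    """Downloader middleware: send a different User-Agent with every request."""

    # A short, illustrative pool; in practice you would maintain a much larger list.
    USER_AGENTS = [
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
        "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
    ]

    def process_request(self, request, spider):
        request.headers["User-Agent"] = random.choice(self.USER_AGENTS)
        return None  # let Scrapy continue processing the request normally


# In settings.py (the module path is a placeholder):
# DOWNLOADER_MIDDLEWARES = {"myproject.middlewares.RotateUserAgentMiddleware": 400}
```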