Hajba | Website Scraping with Python | E-Book | www2.sack.de

E-book, English, 235 pages

Hajba Website Scraping with Python

Using BeautifulSoup and Scrapy
1st edition
ISBN: 978-1-4842-3925-4
Publisher: Apress
Format: PDF
Copy protection: PDF watermark




Closely examine website scraping and data processing: the technique of extracting data from websites in a format suitable for further analysis. You'll review which tools to use, and compare their features and efficiency. Focusing on BeautifulSoup4 and Scrapy, this concise, focused book highlights common problems and suggests solutions that readers can implement on their own.
Website Scraping with Python starts by introducing and installing the scraping tools and explaining the features of the full application that readers will build throughout the book. You'll see how to use BeautifulSoup4 and Scrapy individually or together to achieve the desired results. Because many sites use JavaScript, you'll also employ Selenium with a browser emulator to render these sites and make them ready for scraping.
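Beautiful Soup and Scrapy offer much richer APIs, but the core task the book builds toward, walking parsed HTML and collecting anchor links, can be sketched with Python's standard library alone (a minimal illustration, not code from the book):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href attribute of every <a> tag it encounters."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

extractor = LinkExtractor()
extractor.feed('<html><body><a href="/page1">One</a> <a href="/page2">Two</a></body></html>')
print(extractor.links)  # ['/page1', '/page2']
```

The dedicated libraries covered in the book replace this boilerplate with one-liners such as Beautiful Soup's `find_all('a')`.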
By the end of this book, you'll have a complete scraping application to use and rewrite to suit your needs. As a bonus, the author shows you options for deploying your spiders to the cloud, freeing your own computer from long-running scraping tasks.
What You'll Learn
Install and implement scraping tools individually and together
Run spiders to crawl websites for data from the cloud
Work with emulators and drivers to extract data from scripted sites
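The "Preparation" sections of Chapter 1 stress checking a site's robots.txt before crawling it. That check can be sketched with the standard library's `urllib.robotparser`; the rules below are made up for illustration, not taken from the book:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content; a real scraper fetches this from the target site.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A polite spider skips any URL the rules disallow.
print(parser.can_fetch("*", "https://example.com/private/secret.html"))  # False
print(parser.can_fetch("*", "https://example.com/products.html"))        # True
```

In practice you would call `parser.set_url(...)` and `parser.read()` to load the live file before crawling.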

Who This Book Is For
Readers with some previous Python and software development experience, and an interest in website scraping.

Gabor Laszlo Hajba is an IT consultant who specializes in Java and Python and holds workshops on Java and Java Enterprise Edition. As the CEO of JaPy Szoftver Kft in Sopron, Hungary, he is responsible for designing and developing solutions to customer needs in the enterprise software world. He has also held roles as a software developer with EBCONT Enterprise Technologies and as an Advanced Software Engineer with Zuhlke Group. He considers himself a workaholic, a (hard)core and well-grounded developer, functionally minded, a fan of portable apps, and 'a champion Javavore who loves pushing code', and he loves to develop in Python.




Further Information & Material


1;Table of Contents;5
2;About the Author;11
3;About the Technical Reviewer;12
4;Acknowledgments;13
5;Introduction;14
6;Chapter 1: Getting Started;16
6.1;Website Scraping;16
6.1.1;Projects for Website Scraping;17
6.1.2;Websites Are the Bottleneck;18
6.2;Tools in This Book;18
6.3;Preparation;19
6.3.1;Terms and Robots;20
6.3.1.1;robots.txt;21
6.3.2;Technology of the Website;22
6.3.3;Using Chrome Developer Tools;23
6.3.3.1;Set-up;24
6.3.4;Tool Considerations;27
6.4;Starting to Code;28
6.4.1;Parsing robots.txt;28
6.4.2;Creating a Link Extractor;30
6.4.3;Extracting Images;32
6.5;Summary;33
7;Chapter 2: Enter the Requirements;34
7.1;The Requirements;35
7.2;Preparation;36
7.2.1;Navigating Through “Meat & Fish”;38
7.2.1.1;Selecting the Required Information;43
7.3;Outlining the Application;46
7.4;Navigating the Website;47
7.4.1;Creating the Navigation;48
7.4.2;The requests Library;51
7.4.2.1;Installation;51
7.4.2.2;Getting Pages;51
7.4.3;Switching to requests;52
7.4.4;Putting the Code Together;53
7.5;Summary;54
8;Chapter 3: Using Beautiful Soup;55
8.1;Installing Beautiful Soup;55
8.2;Simple Examples;56
8.2.1;Parsing HTML Text;56
8.2.2;Parsing Remote HTML;58
8.2.3;Parsing a File;59
8.2.4;Difference Between find and find_all;59
8.2.5;Extracting All Links;59
8.2.6;Extracting All Images;60
8.2.7;Finding Tags Through Their Attributes;60
8.2.8;Finding Multiple Tags Based on Property;61
8.2.9;Changing Content;62
8.2.9.1;Adding Tags and Attributes;63
8.2.9.2;Changing Tags and Attributes;64
8.2.9.3;Deleting Tags and Attributes;65
8.2.10;Finding Comments;66
8.2.11;Converting a Soup to HTML Text;67
8.3;Extracting the Required Information;67
8.3.1;Identifying, Extracting, and Calling the Target URLs;68
8.3.2;Navigating the Product Pages;70
8.3.3;Extracting the Information;72
8.3.3.1;Using Dictionaries;72
8.3.3.2;Using Classes;76
8.3.4;Unforeseen Changes;77
8.4;Exporting the Data;79
8.4.1;To CSV;80
8.4.1.1;Quick Glance at the csv Module;80
8.4.1.1.1;Line Endings;82
8.4.1.1.2;Headers;82
8.4.1.2;Saving a Dictionary;83
8.4.1.3;Saving a Class;84
8.4.2;To JSON;87
8.4.2.1;Quick Glance at the json Module;87
8.4.2.2;Saving a Dictionary;88
8.4.2.3;Saving a Class;89
8.4.3;To a Relational Database;90
8.4.4;To a NoSQL Database;97
8.4.4.1;Installing MongoDB;97
8.4.4.2;Writing to MongoDB;98
8.5;Performance Improvements;99
8.5.1;Changing the Parser;100
8.5.2;Parse Only What’s Needed;101
8.5.3;Saving While Working;102
8.6;Developing on a Long Run;104
8.6.1;Caching Intermediate Step Results;104
8.6.2;Caching Whole Websites;105
8.6.2.1;File-Based Cache;106
8.6.2.2;Database Cache;106
8.6.2.3;Saving Space;107
8.6.2.4;Updating the Cache;108
8.7;Source Code for this Chapter;109
8.8;Summary;109
9;Chapter 4: Using Scrapy;111
9.1;Installing Scrapy;112
9.2;Creating the Project;112
9.3;Configuring the Project;114
9.4;Terminology;116
9.4.1;Middleware;116
9.4.2;Pipeline;117
9.4.3;Extension;118
9.4.4;Selectors;118
9.5;Implementing the Sainsbury Scraper;120
9.5.1;What’s This allowed_domains About?;121
9.5.2;Preparation;122
9.5.2.1;Using the Shell;122
9.5.3;def parse(self, response);124
9.5.4;Navigating Through Categories;126
9.5.5;Navigating Through the Product Listings;130
9.5.6;Extracting the Data;132
9.5.7;Where to Put the Data?;137
9.5.7.1;Why Items?;141
9.5.8;Running the Spider;141
9.6;Exporting the Results;147
9.6.1;To CSV;148
9.6.2;To JSON;149
9.6.3;To Databases;151
9.6.3.1;MongoDB;152
9.6.3.2;SQLite;154
9.6.4;Bring Your Own Exporter;157
9.6.4.1;Filtering Duplicates;158
9.6.4.2;Silently Dropping Items;159
9.6.4.3;Fixing the CSV File;161
9.6.4.4;CSV Item Exporter;164
9.7;Caching with Scrapy;167
9.7.1;Storage Solutions;168
9.7.1.1;File System Storage;169
9.7.1.2;DBM Storage;169
9.7.1.3;LevelDB Storage;170
9.7.2;Cache Policies;170
9.7.2.1;Dummy Policy;170
9.7.2.2;RFC2616 Policy;171
9.8;Downloading Images;172
9.9;Using Beautiful Soup with Scrapy;175
9.10;Logging;176
9.11;(A Bit) Advanced Configuration;176
9.11.1;LOG_LEVEL;177
9.11.2;CONCURRENT_REQUESTS;178
9.11.3;DOWNLOAD_DELAY;178
9.11.4;Autothrottling;179
9.11.5;COOKIES_ENABLED;180
9.12;Summary;181
10;Chapter 5: Handling JavaScript;182
10.1;Reverse Engineering;182
10.1.1;Thoughts on Reverse Engineering;185
10.1.2;Summary;185
10.2;Splash;185
10.2.1;Set-up;186
10.2.2;A Dynamic Example;189
10.2.3;Integration with Scrapy;190
10.2.4;Adapting the basic Spider;192
10.2.5;What Happens When Splash Isn’t Running?;196
10.2.6;Summary;196
10.3;Selenium;196
10.3.1;Prerequisites;197
10.3.2;Basic Usage;198
10.3.3;Integration with Scrapy;199
10.3.3.1;scrapy-selenium;200
10.3.4;Summary;202
10.4;Solutions for Beautiful Soup;202
10.4.1;Splash;203
10.4.2;Selenium;204
10.4.3;Summary;205
10.5;Summary;205
11;Chapter 6: Website Scraping in the Cloud;206
11.1;Scrapy Cloud;206
11.1.1;Creating a Project;207
11.1.2;Deploying Your Spider;208
11.1.3;Start and Wait;209
11.1.4;Accessing the Data;211
11.1.5;API;213
11.1.6;Limitations;215
11.1.7;Summary;216
11.2;PythonAnywhere;216
11.2.1;The Example Script;216
11.2.2;PythonAnywhere Configuration;217
11.2.3;Uploading the Script;217
11.2.4;Running the Script;219
11.2.5;This Works Just Manually…;220
11.2.6;Storing Data in a Database?;223
11.2.7;Summary;227
11.3;What About Beautiful Soup?;227
11.4;Summary;229
12;Index;231


