The attention economy of late capitalism demands I spend time clicking on a browser window to do things, rather than automating the world, which we thought we would have all worked out by now.
I want to get data from some plain public HTML site with minimal pain
Scrapy does what you want.
There is also a dedicated cloud service, Scrapinghub, that will deploy it for you at massive scale if you want.
Wait but I have to log in to get my data and I’m too lazy to configure that
It turns out you can automate your local Firefox to do this in an easy, although not scalable, way; thanks, Ian Bicking.
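The lazy version of that trick, sketched with Selenium 4's Firefox driver rather than Bicking's own tooling: point the driver at your everyday Firefox profile so the automated browser inherits your logged-in cookies, and no login scripting is needed at all. The profile path is a placeholder (on Linux it lives somewhere under `~/.mozilla/firefox/`).

```python
def browse_with_profile(profile_path, url):
    """Launch Firefox reusing an existing profile (and so its cookies
    and logins), fetch `url`, and return the page source.

    Assumes Selenium 4; `profile_path` is a placeholder such as
    ~/.mozilla/firefox/xxxxxxxx.default-release
    """
    # Imported inside the function so the sketch is cheap to import
    # even where Selenium is not installed.
    from selenium import webdriver

    opts = webdriver.FirefoxOptions()
    opts.profile = profile_path  # reuse your everyday session state
    driver = webdriver.Firefox(options=opts)
    try:
        driver.get(url)
        return driver.page_source
    finally:
        driver.quit()
```

Not scalable, as advertised: it drives one real browser on one machine, and Firefox will complain if the profile is already in use.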
Chromeless, an automation framework for headless Chrome, seems to be the hip thing here. It also has various easy cloud-deployment options. (I don’t know how cookies are handled.)
Selenium seems to do this. But how can one automate its deployment, plus a bunch of user credentials, with some degree of security and yet the absolute minimum of thought or effort? I do not yet know. To be continued, if absolutely necessary. But to be honest, at this point, at this dizzying pinnacle of using billions of dollars to do glorified fake social behavior, I’d really prefer to just pick lice out of the pelts of my audience the old-fashioned way.
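For what it's worth, the minimum-thought version of "credentials with some degree of security" is just: keep them out of the script and read them from the environment. A sketch of that plus a scripted Selenium login; the login URL, the form field names, and the `SCRAPER_USER`/`SCRAPER_PASS` variable names are all made up, so inspect the real login form before trusting any of them.

```python
import os


def load_credentials():
    """Pull credentials from the environment rather than hard-coding
    them into the script: the absolute minimum of secret hygiene.

    SCRAPER_USER / SCRAPER_PASS are invented names; use whatever
    convention your deployment already has.
    """
    user = os.environ.get("SCRAPER_USER")
    password = os.environ.get("SCRAPER_PASS")
    if not user or not password:
        raise RuntimeError("set SCRAPER_USER and SCRAPER_PASS first")
    return user, password


def log_in(driver, login_url, user, password):
    """Fill in a typical login form via Selenium.

    The field names and submit selector are placeholders; find the
    real ones by inspecting the page.
    """
    from selenium.webdriver.common.by import By

    driver.get(login_url)
    driver.find_element(By.NAME, "username").send_keys(user)
    driver.find_element(By.NAME, "password").send_keys(password)
    driver.find_element(By.CSS_SELECTOR, "form [type=submit]").click()
```

Usage would be roughly: create a `webdriver.Firefox()` (or Chrome) instance, call `log_in(driver, "https://example.com/login", *load_credentials())`, then scrape away with the now-authenticated `driver`, and `driver.quit()` at the end.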
Anyway, here is some stuff I read before deciding this is a gigantic time-waste unless someone pays you lots of money.
SeLite automates browser navigation and testing. It extends Selenium. It
- improves Selenium (API, syntax and visual interface),
- enables reuse,
- supports reporting and interaction,[…]
SeLite enables DB-driven navigation with SQLite
You might also get some mileage out of mozrepl, a remote REPL into a running Firefox.