Post by dsummoner on Nov 3, 2013 13:19:29 GMT -5
I am working on a project (just for the fun of it) for which a web-scraping program would be a boon. In short, I have a starting search page, generated by a keyword search, that lists over 10K documents in PDF format. I would like a program that can download all of the PDF files on that first search page and then do the same for each additional search page in sequence. I am not averse to learning a bit of coding in either Ruby or HTML5 in order to write my own program, but I would rather not spend the time and would instead prefer something 'ready-made' for this purpose.
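Since Ruby was mentioned: the download-then-paginate loop described above can be sketched in a few lines of standard-library Ruby. This is only a sketch under assumptions, not a ready-made tool: the `?page=N` query parameter is a guess at how the search site numbers its result pages (check the real URLs and adjust), and the quoted-`href` regex is a crude stand-in for a proper HTML parser such as Nokogiri.

```ruby
require 'net/http'
require 'uri'

# Pull every link ending in .pdf out of an HTML page and resolve it
# against the page's URL. (Matching quoted hrefs with a regex is crude
# but enough for a sketch; a real scraper would use an HTML parser.)
def pdf_links(html, page_url)
  html.scan(/href=["']([^"']+\.pdf)["']/i)
      .flatten
      .map { |href| URI.join(page_url, href).to_s }
end

# Fetch one search page, download its PDFs, then move to the next page.
# The '?page=N' parameter is an assumption -- adjust it to match how the
# real search site paginates its results.
def grab_all(search_url)
  (1..).each do |n|
    html = Net::HTTP.get(URI("#{search_url}?page=#{n}"))
    links = pdf_links(html, search_url)
    break if links.empty? # no PDF links => past the last page
    links.each do |pdf_url|
      name = File.basename(URI(pdf_url).path)
      File.binwrite(name, Net::HTTP.get(URI(pdf_url)))
    end
  end
end
```

For 10K+ documents you would also want a polite delay between requests and a check that skips files already on disk, so an interrupted run can resume.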
Post by Bartman on Sept 25, 2015 22:18:29 GMT -5
I downloaded a tool called Site Ripper or something like that which said it would rip any images from a given URL, but I never tried it. I don't know about grabbing PDFs. I'd hunt around on the shareware/freeware sites.