I need a web scraper written for the following URL:
[login to view URL]
All the information needed is available on the main page. The number of rows will vary; if a row has no origin city, skip it. The data is listed in blocks, each block with its own contact information, and the contact information appears above its block of data.
The output should be a pipe (|) delimited file with the following column mappings:
origin_city --> data located in the "Load Origin" column, before the comma
origin_state --> data located in the "Load Origin" column, after the comma
ship_date --> data located in the "Date" column, converted to YYYY-MM-DD format;
if the "Date" column is blank, use the current day's date, also in YYYY-MM-DD format
destination_city --> data located in the "Destination" column, before the comma
destination_state --> data located in the "Destination" column, after the comma
receive_date --> leave blank
trailer_type --> data is the abbreviations located in the "Type" column
load_size --> add the text "Full"
weight --> leave blank
length --> leave blank
width --> leave blank
height --> leave blank
trip_miles --> data located in the "Miles" column
pay_rate --> data located in the "Rate" column
contact_phone --> data located in the contact cell above each block of loads (e.g. "PH (812-823-4212)")
contact_name --> data located in the contact cell above each block of loads; the contact name is listed after the word "Contact"
tarp_required --> leave blank
comment --> data located in the "Quantity/Notes" column
load_number --> leave blank
commodity --> leave blank
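For the two contact fields above, extraction might look like the sketch below. Note the cell format is an assumption based on the single "PH (812-823-4212)" example in this description, and `parse_contact` is a hypothetical helper name, not part of the spec:

```perl
use strict;
use warnings;

# Assumed contact-cell format, inferred from the one example in the spec:
#   "PH (812-823-4212) Contact John Smith"
sub parse_contact {
    my ($cell) = @_;
    my ($phone) = $cell =~ /PH\s*\(?\s*([\d-]+)/;     # digits and dashes after "PH"
    my ($name)  = $cell =~ /Contact\s+(.+?)\s*$/i;    # everything after the word "Contact"
    return ($phone // '', $name // '');               # blank, never "null"/"blank"
}
```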
The first line of the output should contain all of the column headers.
Any field that contains no data should be left blank.
Please do not use words like "null" or "blank" in blank columns.
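The output rules above (header line, pipe delimiters, empty blanks, date fallback) could be sketched roughly as follows. The `ship_date` and `row` helper names and the MM/DD/YYYY input date format are assumptions for illustration; the deliverable itself must use Modern::Perl per the requirements below:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';
use POSIX qw(strftime);

# Column headers, in the order listed in the spec; they form the first output line.
my @headers = qw(
    origin_city origin_state ship_date destination_city destination_state
    receive_date trailer_type load_size weight length width height
    trip_miles pay_rate contact_phone contact_name tarp_required
    comment load_number commodity
);

# Convert an assumed MM/DD/YYYY source date to YYYY-MM-DD; fall back to
# the current day's date (also YYYY-MM-DD) when the cell is blank.
sub ship_date {
    my ($raw) = @_;
    return strftime('%Y-%m-%d', localtime) if !defined $raw || $raw !~ /\S/;
    my ($m, $d, $y) = $raw =~ m{^(\d{1,2})/(\d{1,2})/(\d{4})$}
        or return strftime('%Y-%m-%d', localtime);
    return sprintf '%04d-%02d-%02d', $y, $m, $d;
}

# Emit one pipe-delimited row; missing fields become empty strings,
# never the literal words "null" or "blank".
sub row {
    my (%f) = @_;
    return join '|', map { $f{$_} // '' } @headers;
}

say join '|', @headers;
say row(
    origin_city => 'Bloomington', origin_state => 'IN',
    ship_date   => ship_date('07/04/2019'),
    load_size   => 'Full',
);
```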
Below is a sample output of the first 5 columns using sample data:
The deliverable will be a Perl .pl file that must run on
Ubuntu Linux and must use Modern::Perl. The Perl .pl file
should be called '[login to view URL]' and the output file should be
called '[login to view URL]'
It will be scheduled in cron to run unattended every 15 minutes.
Please specify which language/OS/modules you plan to use.
Also, please include the word "raccoon" in your bid so I know that
you read this description.
I can provide you with a Perl web scraping program for [login to view URL] in less than a day. I'll use WWW::Mechanize to fetch the page and HTML::TreeBuilder::LibXML to parse the HTML.
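The two modules named in this bid could be combined roughly as below. Since the real page is behind a login, a static HTML snippet with an assumed table layout stands in for the WWW::Mechanize fetch; class names and column order are guesses, not the site's actual markup:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'say';
use HTML::TreeBuilder::LibXML;   # CPAN parser named in the bid
# WWW::Mechanize->new->get($url)->content would supply this in the real scraper.

# Assumed markup -- the actual table layout is unknown (URL requires login).
my $html = <<'HTML';
<table>
  <tr><td class="contact">PH (812-823-4212) Contact John Doe</td></tr>
  <tr><td>Bloomington, IN</td><td>Chicago, IL</td><td>V</td></tr>
</table>
HTML

my $tree = HTML::TreeBuilder::LibXML->new;
$tree->parse($html);
$tree->eof;

# Walk every table row; skip rows whose first cell is not a "City, ST"
# origin, as the spec requires for rows without an origin city.
my @rows;
for my $tr ($tree->findnodes('//tr')) {
    my @cells = map { my $t = $_->as_text; $t =~ s/^\s+|\s+$//g; $t }
                $tr->findnodes('./td');
    next unless @cells >= 3 && $cells[0] =~ /^(.+),\s*([A-Z]{2})$/;
    my ($origin_city, $origin_state) = ($1, $2);
    my ($dest_city, $dest_state) = $cells[1] =~ /^(.+),\s*([A-Z]{2})$/;
    push @rows, join '|', $origin_city, $origin_state, $dest_city, $dest_state, $cells[2];
}
say for @rows;
```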
16 freelancers are bidding on average $163 for this job
Hi, I have done many Scrapy projects. I read the description (raccoon). I will use the Python 2.7 Scrapy spider library. I am interested in your project. Let's talk details and start the project.
Hi, I have 2+ years of full-stack experience with expertise in Python. I have previously worked on projects like this, and I can deliver this project on time and within your budget.