The lines of the input.txt file are passed to your mapper function one by one. For each line, the mapper should download the page at that URL, lower-case all of its text, and emit <term, docid> pairs. (See Downloader.java for a sample of how to use the crawler4j library to download a single page.) Here is an example, modeled on the standard WordCount mapper.
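The following is only a minimal sketch. It assumes a Hadoop MapReduce job, an input-line format of "<docid> <url>" (an assumption), and a hypothetical helper Downloader.fetchText(url) that wraps the crawler4j single-page download shown in Downloader.java:

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class InvertedIndexMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Each input line is assumed to hold "<docid> <url>".
            String[] parts = value.toString().trim().split("\\s+");
            if (parts.length < 2) {
                return; // skip malformed lines
            }
            String docId = parts[0];
            String url = parts[1];

            // Download the page text (hypothetical helper wrapping the
            // crawler4j code from Downloader.java) and lower-case it.
            String pageText = Downloader.fetchText(url).toLowerCase();

            // Emit one <term, docid> pair per token.
            for (String term : pageText.split("\\W+")) {
                if (!term.isEmpty()) {
                    context.write(new Text(term), new Text(docId));
                }
            }
        }
    }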
Submit a postings.txt file in which each line contains the postings list for one term: <term, docid1, docid2, ...>. Before uploading this file, truncate it so that only the first 500 lines remain.
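A matching reducer could assemble one postings line per term. The sketch below assumes the mapper above and Hadoop's default TextOutputFormat, which separates the term and its doc-id list with a tab; any exact delimiter requirements of the submission format are left to you.

    import java.io.IOException;
    import java.util.LinkedHashSet;
    import java.util.Set;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class InvertedIndexReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text term, Iterable<Text> docIds, Context context)
                throws IOException, InterruptedException {
            // Collect the doc ids for this term, dropping duplicates but
            // keeping the order in which they first appear.
            Set<String> unique = new LinkedHashSet<>();
            for (Text docId : docIds) {
                unique.add(docId.toString());
            }
            // Emit one postings line: term, then its comma-separated doc ids.
            context.write(term, new Text(String.join(", ", unique)));
        }
    }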
Submit a report.txt file with this content:
Total number of distinct terms: ....
Number of documents containing term 'is': ...
Number of documents containing term 'Satellite': ...
First name of member 1 of your team: ...
List of URLs of documents containing the first name of member 1 of your team: ...
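The first three numbers can be read off the full (untruncated) reducer output. The following is a rough sketch only: the postings.txt path and the assumption that each line is "<term>\t<docid1, docid2, ...>" are both assumptions, and the last two report items (which need the docid-to-URL mapping from input.txt) are not covered here.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;

    public class ReportStats {

        // Number of documents containing a term = number of doc ids on the
        // term's postings line (0 if the term has no postings line at all).
        static long docCount(List<String> postings, String term) {
            for (String line : postings) {
                String[] fields = line.split("[\\t,]\\s*");
                if (fields.length > 0 && fields[0].equals(term)) {
                    return fields.length - 1;
                }
            }
            return 0;
        }

        public static void main(String[] args) throws IOException {
            // Path to the full postings file is an assumption.
            List<String> postings = Files.readAllLines(Paths.get("postings.txt"));

            // Each postings line corresponds to one distinct term.
            System.out.println("Total number of distinct terms: " + postings.size());
            System.out.println("Number of documents containing term 'is': "
                    + docCount(postings, "is"));
            System.out.println("Number of documents containing term 'Satellite': "
                    + docCount(postings, "Satellite"));
        }
    }

Note that the mapper lower-cased all text, so look up each report term exactly as it is spelled in the report template.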