I'm facing this kind of problem for the first time. There is a database stored in a 370 MB text file.
It has three fields: field1, field2, field3. A field2 value can repeat up to a hundred times, while field1 and field3 are unique. The task is to extract, for every unique field2 value, all matching records from the database, i.e. to end up with about 180,000 rows where the field1 and field3 data for each field2 are merged together. The database currently has 3,280,000 lines. I have built an algorithm, but because it rescans the entire file on every lookup, it processes only 3,000-4,000 lines per day. The complication is that a plain extraction isn't enough: every row also has to be processed with PHP. How can I speed this up?
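One common way to avoid rescanning the file for every field2 value is a single pass: read the file once, build an in-memory index keyed by field2, then emit one merged row per key. Below is a minimal PHP sketch of that idea. It assumes details the post doesn't give: tab-separated lines in the order field1, field2, field3, and that the grouped index (roughly the size of the file) fits in RAM; the function and file names are made up for illustration.

```php
<?php
// Single-pass grouping: read the file once, index rows by field2,
// then write one merged row per unique field2 value.
// Assumed format (not stated in the original post): one record per line,
// tab-separated, in the order field1, field2, field3.
function mergeByField2(string $inPath, string $outPath): int {
    $groups = [];                               // field2 => ["field1,field3", ...]
    $in = fopen($inPath, 'r');
    while (($line = fgets($in)) !== false) {
        $parts = explode("\t", rtrim($line, "\r\n"), 3);
        if (count($parts) !== 3) {
            continue;                           // skip malformed lines
        }
        [$f1, $f2, $f3] = $parts;
        $groups[$f2][] = "$f1,$f3";
    }
    fclose($in);

    $out = fopen($outPath, 'w');
    foreach ($groups as $f2 => $pairs) {
        // one output row per unique field2, all field1/field3 pairs merged
        fwrite($out, $f2 . "\t" . implode(';', $pairs) . "\n");
    }
    fclose($out);
    return count($groups);                      // number of unique field2 values
}
```

This replaces the per-value rescans (O(unique values × file size)) with one sequential read plus hash lookups, so the whole job is bounded by reading the 370 MB file once. If the index does not fit in memory, the same effect can be had by first sorting the file on field2 (e.g. with an external sort) and then merging adjacent rows in one streaming pass, or by loading the file into a real database such as MySQL/SQLite and using GROUP BY.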