How to Read Large Text Files in Python | DigitalOcean (2024)

The Python file object provides various ways to read a text file. A popular approach is the readlines() method, which returns a list of all the lines in the file. However, it is not suitable for reading a large text file, because the whole file content is loaded into memory.
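For contrast, here is a minimal, self-contained sketch of the readlines() approach (the file name is a hypothetical example, and a small sample file is created so the snippet runs on its own):

```python
# create a small sample file (hypothetical path) so the example is self-contained
file_name = 'sample.txt'
with open(file_name, 'w') as f:
    f.write('first line\nsecond line\n')

with open(file_name) as f:
    lines = f.readlines()  # the entire file is materialized as a list in memory

print(lines)  # ['first line\n', 'second line\n']
```

For a small file this is convenient, but for a multi-gigabyte file the resulting list needs roughly as much memory as the file itself.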

Reading Large Text Files in Python

We can use the file object as an iterator. The iterator returns one line at a time, so each line can be processed as it is read. This does not load the whole file into memory, which makes it suitable for reading large files in Python. Here is a code snippet that reads a large file by treating the file object as an iterator.

import resource
import os

file_name = "/Users/pankaj/abcdef.txt"
print(f'File Size is {os.stat(file_name).st_size / (1024 * 1024)} MB')

txt_file = open(file_name)
count = 0
for line in txt_file:
    # we can process file line by line here, for simplicity I am taking count of lines
    count += 1
txt_file.close()

print(f'Number of Lines in the file is {count}')
print('Peak Memory Usage =', resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)
print('User Mode Time =', resource.getrusage(resource.RUSAGE_SELF).ru_utime)
print('System Mode Time =', resource.getrusage(resource.RUSAGE_SELF).ru_stime)

When we run this program, the output produced is:

File Size is 257.4920654296875 MB
Number of Lines in the file is 60000000
Peak Memory Usage = 5840896
User Mode Time = 11.46692
System Mode Time = 0.09655899999999999
  • I am using the os module to print the size of the file.
  • The resource module is used to check the memory and CPU time usage of the program.
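One caveat worth knowing about the resource module: the unit of ru_maxrss is platform-dependent. Linux reports it in kilobytes, while macOS reports it in bytes. A small helper (peak_memory_mb is our own hypothetical name, not part of the article's code) can normalize the value to megabytes:

```python
import resource
import sys

def peak_memory_mb():
    """Return this process's peak resident set size in MB.

    ru_maxrss is reported in kilobytes on Linux but in bytes on macOS,
    so the scaling factor depends on the platform.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return rss / (1024 * 1024) if sys.platform == 'darwin' else rss / 1024

print(f'Peak Memory Usage = {peak_memory_mb():.2f} MB')
```

Note that the resource module is only available on Unix-like systems; on Windows you would need a third-party package such as psutil.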

We can also use the with statement to open the file. In that case, we don't have to close the file object explicitly; it is closed automatically when the block exits.

with open(file_name) as txt_file:
    for line in txt_file:
        # process the line
        pass

What if the Large File doesn’t have lines?

The above code works well when the large file's content is divided into many lines. But if a large amount of data sits on a single line, iterating line by line will still pull it all into memory at once. In that case, we can read the file content into a fixed-size buffer and process it chunk by chunk.

with open(file_name) as f:
    while True:
        data = f.read(1024)
        if not data:
            break
        print(data)

The above code reads the file into a buffer of 1024 bytes at a time and prints each chunk to the console. When the whole file has been read, read() returns an empty string, so data becomes falsy and the break statement terminates the while loop. This method is also useful for reading binary files such as images, PDFs, and Word documents; for those, open the file in binary mode ('rb') so that read() returns bytes. Here is a simple code snippet to make a copy of the file.

with open(destination_file_name, 'w') as out_file:
    with open(source_file_name) as in_file:
        for line in in_file:
            out_file.write(line)
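The line-by-line copy above is fine for text, but for binary files (images, PDFs, etc.) the files should be opened in 'rb'/'wb' mode and copied in fixed-size chunks. Here is a sketch using the standard library's shutil.copyfileobj; the file names are hypothetical, and a small source file is created so the snippet runs on its own:

```python
import shutil

source_file_name = 'source.bin'            # hypothetical example paths
destination_file_name = 'source_copy.bin'

# create a small sample source file so the example is self-contained
with open(source_file_name, 'wb') as f:
    f.write(b'\x00\x01\x02' * 1000)

# copy in 1 MB chunks; binary mode avoids any newline translation
with open(source_file_name, 'rb') as in_file, \
        open(destination_file_name, 'wb') as out_file:
    shutil.copyfileobj(in_file, out_file, 1024 * 1024)
```

Like the manual while loop, copyfileobj never holds more than one chunk in memory, so it is safe for arbitrarily large files.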

Reference: StackOverflow Question

To recap, the article covers efficient methods for handling large text files in Python, addressing memory consumption and processing speed. The key concepts are:

  1. readlines() Method: The article acknowledges the popular approach of using the readlines() method to read all lines in a text file. However, it emphasizes that this method may not be suitable for large files due to the entire file content being loaded into memory.

  2. Iterating Through File as an Iterator: The recommended approach for reading large text files involves treating the file object as an iterator. By using a for loop with the file object, the article demonstrates how to process each line individually, avoiding the need to load the entire file into memory. This approach is memory-efficient and suitable for large files.

  3. File Size Calculation: The code snippet includes the use of the os module to calculate and print the size of the file. This provides insight into the magnitude of the file being processed.

  4. Resource Module for Memory and CPU Time Usage: The article utilizes the resource module to measure the memory and CPU time usage of the Python program. This is valuable for performance analysis, especially when dealing with large files. The output includes peak memory usage, user mode time, and system mode time.

  5. Context Manager (with Statement): The article introduces the with statement as an elegant way to open and manage files. This approach ensures that the file is properly closed after use, enhancing code readability and reducing the likelihood of resource leaks.

  6. Handling Large Files Without Line Breaks: The article addresses a scenario where a large file contains a substantial amount of data in a single line. To mitigate memory concerns, it proposes reading the file content into a buffer and processing it iteratively. This method is demonstrated using a while loop and is applicable not only to text files but also to binary files such as images, PDFs, and word documents.

  7. File Copying: A brief example illustrates how to make a copy of a file using the with statement. This code snippet showcases the simplicity and readability that the with statement brings to file operations.

  8. Reference to StackOverflow Question: The article concludes by referencing a StackOverflow question, tying the presented techniques back to a broader community discussion.
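As a footnote to the buffered-read technique in point 6, the while loop can also be written with Python's two-argument iter(callable, sentinel), which keeps calling f.read(1024) until it returns the empty-string sentinel at end of file. A self-contained sketch (the sample file is hypothetical, created only so the snippet runs on its own):

```python
# create a small sample file so the example is self-contained
file_name = 'sample_chunks.txt'
with open(file_name, 'w') as f:
    f.write('x' * 2500)

chunk_sizes = []
with open(file_name) as f:
    # iter(callable, sentinel) yields f.read(1024) until it returns '' at EOF
    for data in iter(lambda: f.read(1024), ''):
        chunk_sizes.append(len(data))

print(chunk_sizes)  # [1024, 1024, 452]
```

In text mode, read(1024) counts characters rather than bytes; here the content is ASCII, so the counts coincide.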

In summary, the article provides a comprehensive guide to efficiently handle large text files in Python, demonstrating best practices and leveraging the language's features to optimize memory usage and processing speed. The inclusion of resource monitoring adds a performance-oriented dimension to the discussion, emphasizing the practicality of the presented techniques.

