Multiprocessing Python Example For Loop

Here’s an example of using the multiprocessing module in Python to parallelize a for loop:

import multiprocessing

def process_item(item):
    # Replace this with the actual processing logic for each item
    result = item * item
    return result

def main():
    # Number of CPU cores to use for multiprocessing
    num_cores = multiprocessing.cpu_count()

    # Sample data (you can replace this with your own list of items)
    data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    # Create a multiprocessing Pool with the number of cores;
    # the with-statement shuts the pool down automatically on exit
    with multiprocessing.Pool(processes=num_cores) as pool:
        # Parallelize the for loop using the map function of the Pool
        results = pool.map(process_item, data)

    # Print the results
    print("Original data:", data)
    print("Processed data:", results)

if __name__ == "__main__":
    main()

In this example, the process_item(item) function takes an item as input and performs some processing on it (here, simply squaring it). In the main() function, we create a multiprocessing.Pool with as many worker processes as there are CPU cores on the system, then use pool.map() to apply process_item() to every element of the data list in parallel.

Note that pool.map() automatically splits the data into chunks and assigns each chunk to a separate worker process, allowing parallel execution on multiple CPU cores.
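When the per-item work is cheap, the chunking itself matters. Here is a minimal sketch of passing an explicit chunksize to pool.map() (the function name squares_in_parallel and the chunksize of 25 are illustrative choices, not from the original example):

```python
import multiprocessing

def process_item(item):
    return item * item

def squares_in_parallel(data, num_cores=4, chunksize=25):
    # chunksize controls how many items each worker receives per task;
    # larger chunks reduce inter-process communication overhead
    with multiprocessing.Pool(processes=num_cores) as pool:
        return pool.map(process_item, data, chunksize=chunksize)

if __name__ == "__main__":
    print(squares_in_parallel(range(100))[:5])  # prints [0, 1, 4, 9, 16]
```

If you leave chunksize unset, pool.map() picks a reasonable default based on the length of the input and the number of workers.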

When you run this code, the processing of items is distributed across multiple cores, which can significantly reduce execution time for computationally intensive (CPU-bound) tasks. For trivial workloads like squaring a handful of numbers, however, the overhead of starting worker processes can outweigh the gain.
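The same pattern is also available through the standard-library concurrent.futures module, which some find easier to compose with other code. This sketch (the helper name run_parallel is mine, not from the article) is equivalent to the Pool example above:

```python
from concurrent.futures import ProcessPoolExecutor

def process_item(item):
    return item * item

def run_parallel(data, max_workers=4):
    # executor.map mirrors pool.map but returns a lazy iterator,
    # so we wrap it in list() to collect all results
    with ProcessPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(process_item, data))

if __name__ == "__main__":
    print(run_parallel([1, 2, 3, 4, 5]))  # prints [1, 4, 9, 16, 25]
```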

Python parallel for loop multiprocessing example

Here’s an example of using multiprocessing in Python to parallelize a for loop, this time with the parallelization factored out into a reusable function:

import multiprocessing

def process_item(item):
    # Replace this with the actual processing logic for each item
    result = item * item
    return result

def parallel_for_loop(data, num_cores):
    # Create a multiprocessing Pool with the given number of cores;
    # the with-statement shuts the pool down automatically on exit
    with multiprocessing.Pool(processes=num_cores) as pool:
        # Parallelize the for loop using the map function of the Pool
        results = pool.map(process_item, data)

    return results

def main():
    # Number of CPU cores to use for multiprocessing
    num_cores = multiprocessing.cpu_count()

    # Sample data (you can replace this with your own list of items)
    data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

    # Perform the parallel for loop
    results = parallel_for_loop(data, num_cores)

    # Print the results
    print("Original data:", data)
    print("Processed data:", results)

if __name__ == "__main__":
    main()

In this example, we have the same process_item(item) function, which represents the processing logic for each item in the for loop.

The parallel_for_loop(data, num_cores) function takes the data list and the number of CPU cores as arguments and performs the parallelization using multiprocessing.Pool. It then returns the results of the parallel processing.
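Note that pool.map() passes only a single argument to the worker function. If your loop body needs several arguments, Pool.starmap() unpacks argument tuples instead; a small sketch of that variant (scale_item and its signature are illustrative, not from the article):

```python
import multiprocessing

def scale_item(item, factor):
    return item * factor

def parallel_scale(pairs, num_cores=2):
    # starmap unpacks each (item, factor) tuple into scale_item's arguments
    with multiprocessing.Pool(processes=num_cores) as pool:
        return pool.starmap(scale_item, pairs)

if __name__ == "__main__":
    print(parallel_scale([(1, 10), (2, 10), (3, 10)]))  # prints [10, 20, 30]
```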

The main() function sets the number of cores to use based on the available CPU cores and calls the parallel_for_loop() function with the sample data list.

When you run this code, the items are processed in parallel, with the workload distributed across the available CPU cores. This can yield a significant speedup for computationally intensive tasks and large datasets.
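For large datasets where you reduce the results rather than keep them all, Pool.imap_unordered() yields results as workers finish instead of building the whole result list up front. A sketch under that assumption (the range size and chunksize here are arbitrary choices):

```python
import multiprocessing

def process_item(item):
    return item * item

def sum_of_squares(n, chunksize=100):
    # imap_unordered yields results as they complete, in arbitrary order,
    # which is fine here because addition is order-independent
    with multiprocessing.Pool() as pool:
        return sum(pool.imap_unordered(process_item, range(n), chunksize=chunksize))

if __name__ == "__main__":
    print(sum_of_squares(1000))  # prints 332833500
```

Use the ordered Pool.imap() instead when the order of results matters.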

  • Yaryna Ostapchuk

    I am an enthusiastic learner and aspiring Python developer with expertise in Django and Flask. I pursued my education at Ivan Franko Lviv University, specializing in the Faculty of Physics. My skills encompass Python programming, backend development, and working with databases. I am well-versed in various computer software, including Ubuntu, Linux, MaximDL, LabView, C/C++, and Python, among others.
