fileproc

package
v1.6.0

Warning: This package is not in the latest version of its module.
Published: Nov 29, 2025 | License: MIT | Imports: 4 | Imported by: 0

Documentation

Overview

Package fileproc provides concurrent file processing utilities.

Constants

const DefaultWorkerMultiplier = 2

DefaultWorkerMultiplier is the multiplier applied to NumCPU for worker count. 2x is optimal for mixed I/O and CGO workloads.
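
As a quick illustration of how the default is derived: the worker count is runtime.NumCPU() multiplied by this constant. A minimal sketch; the import path example.com/fileproc is a placeholder, since the module's real path is not shown on this page.

package main

import (
	"fmt"
	"runtime"

	"example.com/fileproc" // placeholder import path; substitute the module's real path
)

func main() {
	// Default worker count used when no explicit maxWorkers is given.
	workers := runtime.NumCPU() * fileproc.DefaultWorkerMultiplier
	fmt.Println("default workers:", workers)
}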

Variables

This section is empty.

Functions

func ForEachFile

func ForEachFile[T any](files []string, fn func(string) (T, error)) []T

ForEachFile processes files in parallel, calling fn for each file. No parser is provided; use this for non-AST operations (e.g., SATD scanning). Uses 2x NumCPU workers by default.
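
A minimal sketch of ForEachFile for a non-AST task, here counting TODO markers per file as a stand-in for SATD (self-admitted technical debt) scanning. The import path is a placeholder, and errors are skipped silently by this variant (use ForEachFileWithErrors to observe them).

package main

import (
	"bytes"
	"fmt"
	"os"

	"example.com/fileproc" // placeholder import path
)

func main() {
	files := []string{"a.go", "b.go"} // paths to scan

	// fn runs concurrently across workers; files whose fn call fails are skipped.
	counts := fileproc.ForEachFile(files, func(path string) (int, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return 0, err
		}
		return bytes.Count(data, []byte("TODO")), nil
	})

	fmt.Println("per-file TODO counts:", counts)
}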

func ForEachFileN

func ForEachFileN[T any](files []string, maxWorkers int, fn func(string) (T, error), onProgress ProgressFunc, onError ErrorFunc) []T

ForEachFileN processes files with a configurable worker count and callbacks. If maxWorkers is <= 0, the worker count defaults to 2x NumCPU.
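
A sketch of the N variant with an explicit worker cap and both callbacks supplied. The import path is a placeholder; the atomic counter is a defensive choice, since the page does not state whether the progress callback may run concurrently.

package main

import (
	"fmt"
	"os"
	"sync/atomic"

	"example.com/fileproc" // placeholder import path
)

func main() {
	files := []string{"a.txt", "b.txt", "c.txt"}

	var done atomic.Int64

	sizes := fileproc.ForEachFileN(files, 4, // cap at 4 workers; <= 0 falls back to 2x NumCPU
		func(path string) (int64, error) {
			info, err := os.Stat(path)
			if err != nil {
				return 0, err
			}
			return info.Size(), nil
		},
		func() { done.Add(1) },                                  // onProgress: after each file
		func(path string, err error) { fmt.Println(path, err) }, // onError: per failing file
	)

	fmt.Println("processed:", done.Load(), "sizes:", sizes)
}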

func ForEachFileWithErrors

func ForEachFileWithErrors[T any](files []string, fn func(string) (T, error), onError ErrorFunc) []T

ForEachFileWithErrors processes files in parallel with error callback.
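
A sketch showing the error callback: each failing file is reported instead of being dropped silently. The import path is a placeholder.

package main

import (
	"log"
	"os"

	"example.com/fileproc" // placeholder import path
)

func main() {
	files := []string{"a.txt", "missing.txt"}

	sizes := fileproc.ForEachFileWithErrors(files,
		func(path string) (int, error) {
			data, err := os.ReadFile(path)
			if err != nil {
				return 0, err
			}
			return len(data), nil // bytes per readable file
		},
		func(path string, err error) {
			// Invoked for each file whose fn call returns an error.
			log.Printf("skipping %s: %v", path, err)
		},
	)

	log.Println("byte counts:", sizes)
}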

func ForEachFileWithProgress

func ForEachFileWithProgress[T any](files []string, fn func(string) (T, error), onProgress ProgressFunc) []T

ForEachFileWithProgress processes files in parallel with optional progress callback.
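
A sketch wiring the progress callback into a simple N-of-M status line. The import path is a placeholder; the atomic counter is defensive, since callback concurrency is not specified on this page.

package main

import (
	"fmt"
	"os"
	"sync/atomic"

	"example.com/fileproc" // placeholder import path
)

func main() {
	files := []string{"a.txt", "b.txt"}
	total := len(files)

	var done atomic.Int64

	_ = fileproc.ForEachFileWithProgress(files,
		func(path string) (struct{}, error) {
			_, err := os.ReadFile(path) // stand-in for the real per-file work
			return struct{}{}, err
		},
		func() {
			fmt.Printf("\rprocessed %d/%d", done.Add(1), total)
		},
	)
	fmt.Println()
}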

func ForEachFileWithResource

func ForEachFileWithResource[T any, R any](
	files []string,
	initResource func() (R, error),
	closeResource func(R),
	fn func(R, string) (T, error),
	onProgress ProgressFunc,
) []T

ForEachFileWithResource processes files in parallel, calling fn for each file with a per-worker resource. The initResource function is called once per worker to create the resource (e.g., git repo handle). The closeResource function is called when the worker is done to release the resource. Uses 2x NumCPU workers by default.
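
A sketch of the per-worker resource pattern. The documentation's example resource is a git repository handle; this sketch substitutes a reusable SHA-256 hasher so it stays self-contained. The import path is a placeholder.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"hash"
	"io"
	"os"

	"example.com/fileproc" // placeholder import path
)

func main() {
	files := []string{"a.txt", "b.txt"}

	digests := fileproc.ForEachFileWithResource(files,
		// initResource: one hasher per worker, reused for every file that worker handles.
		func() (hash.Hash, error) { return sha256.New(), nil },
		// closeResource: nothing to release for a hasher.
		func(h hash.Hash) {},
		// fn: receives the worker's resource alongside the file path.
		func(h hash.Hash, path string) (string, error) {
			f, err := os.Open(path)
			if err != nil {
				return "", err
			}
			defer f.Close()
			h.Reset()
			if _, err := io.Copy(h, f); err != nil {
				return "", err
			}
			return hex.EncodeToString(h.Sum(nil)), nil
		},
		nil, // no progress callback
	)

	fmt.Println(digests)
}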

func MapFiles

func MapFiles[T any](files []string, fn func(*parser.Parser, string) (T, error)) []T

MapFiles processes files in parallel, calling fn for each file with a dedicated parser. Results are collected and returned in arbitrary order. Errors from individual files are silently skipped; use MapFilesWithErrors for error handling. Uses 2x NumCPU workers by default (optimal for mixed I/O and CGO workloads).
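
A sketch of the MapFiles wiring. Both import paths below are placeholders, and the per-worker *parser.Parser is left unused because its API is not documented on this page; real code would build the AST with it inside fn.

package main

import (
	"fmt"
	"os"

	"example.com/fileproc"        // placeholder import path
	"example.com/fileproc/parser" // placeholder; the parser package's real path is not shown here
)

func main() {
	files := []string{"a.go", "b.go"}

	// Each worker gets its own *parser.Parser; files whose fn call fails are skipped silently.
	results := fileproc.MapFiles(files, func(p *parser.Parser, path string) (int, error) {
		// Real code would build the AST with p here; its API is not documented
		// on this page, so this sketch only counts bytes.
		data, err := os.ReadFile(path)
		if err != nil {
			return 0, err
		}
		return len(data), nil
	})

	fmt.Println("results (arbitrary order):", results)
}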

func MapFilesN

func MapFilesN[T any](files []string, maxWorkers int, fn func(*parser.Parser, string) (T, error), onProgress ProgressFunc, onError ErrorFunc) []T

MapFilesN processes files with a configurable worker count and callbacks. If maxWorkers is <= 0, the worker count defaults to 2x NumCPU.
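
A short sketch of the N variant, capping the worker (and therefore parser) count explicitly; nil callbacks mean errors are skipped and no progress is reported. Import paths are placeholders and the per-file work is a stand-in.

package main

import (
	"fmt"

	"example.com/fileproc"        // placeholder import path
	"example.com/fileproc/parser" // placeholder parser path
)

func main() {
	files := []string{"a.go", "b.go"}

	// Cap at 2 workers, and therefore 2 parsers.
	lens := fileproc.MapFilesN(files, 2,
		func(p *parser.Parser, path string) (int, error) {
			return len(path), nil // stand-in work; real code would use p
		},
		nil, // onProgress
		nil, // onError
	)

	fmt.Println(lens)
}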

func MapFilesWithErrors

func MapFilesWithErrors[T any](files []string, fn func(*parser.Parser, string) (T, error), onError ErrorFunc) []T

MapFilesWithErrors processes files in parallel with error callback. The onError callback is invoked for each file that fails processing.
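
A sketch of the error-callback variant; the per-file work is a stand-in and the import paths are placeholders.

package main

import (
	"log"

	"example.com/fileproc"        // placeholder import path
	"example.com/fileproc/parser" // placeholder parser path
)

func main() {
	files := []string{"a.go", "broken.go"}

	results := fileproc.MapFilesWithErrors(files,
		func(p *parser.Parser, path string) (string, error) {
			return path, nil // stand-in work; real code would use p
		},
		func(path string, err error) {
			log.Printf("parse failed for %s: %v", path, err) // called once per failing file
		},
	)

	log.Println(results)
}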

func MapFilesWithProgress

func MapFilesWithProgress[T any](files []string, fn func(*parser.Parser, string) (T, error), onProgress ProgressFunc) []T

MapFilesWithProgress processes files in parallel with optional progress callback.
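
A sketch showing a percentage-style progress display; import paths are placeholders and the per-file work is a stand-in.

package main

import (
	"fmt"
	"sync/atomic"

	"example.com/fileproc"        // placeholder import path
	"example.com/fileproc/parser" // placeholder parser path
)

func main() {
	files := []string{"a.go", "b.go", "c.go"}

	var done atomic.Int64

	_ = fileproc.MapFilesWithProgress(files,
		func(p *parser.Parser, path string) (struct{}, error) {
			return struct{}{}, nil // stand-in work; real code would use p
		},
		func() {
			fmt.Printf("\r%d%%", done.Add(1)*100/int64(len(files)))
		},
	)
	fmt.Println()
}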

Types

type ErrorFunc

type ErrorFunc func(path string, err error)

ErrorFunc is called when a file processing error occurs. Receives the file path and the error. If nil, errors are silently skipped.
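
A sketch of an ErrorFunc that collects failing paths. The mutex is a defensive assumption, since the page does not say whether the callback can be invoked from several workers at once; the import path is a placeholder.

package main

import (
	"fmt"
	"sync"

	"example.com/fileproc" // placeholder import path
)

func main() {
	var (
		mu     sync.Mutex
		failed []string
	)

	// The mutex is a defensive assumption: the documentation does not state
	// whether the callback may run concurrently on several workers.
	onErr := fileproc.ErrorFunc(func(path string, err error) {
		mu.Lock()
		defer mu.Unlock()
		failed = append(failed, path)
	})

	_ = fileproc.ForEachFileWithErrors([]string{"a.txt"},
		func(path string) (struct{}, error) { return struct{}{}, nil },
		onErr,
	)

	fmt.Println("failed files:", failed)
}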

type ProgressFunc

type ProgressFunc func()

ProgressFunc is called after each file is processed.
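
A sketch of a ProgressFunc backed by an atomic counter, again a defensive choice given that callback concurrency is not specified; the import path is a placeholder.

package main

import (
	"fmt"
	"sync/atomic"

	"example.com/fileproc" // placeholder import path
)

func main() {
	var done atomic.Int64

	// An atomic counter keeps the callback safe even if it is invoked from
	// several worker goroutines at once (not specified on this page).
	tick := fileproc.ProgressFunc(func() { done.Add(1) })

	_ = fileproc.ForEachFileWithProgress([]string{"a.txt", "b.txt"},
		func(path string) (struct{}, error) { return struct{}{}, nil },
		tick,
	)

	fmt.Println("files processed:", done.Load())
}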
