Documentation ¶
Overview ¶
Package fileproc provides concurrent file processing utilities.
Index ¶
- Constants
- func ForEachFile[T any](files []string, fn func(string) (T, error)) []T
- func ForEachFileN[T any](files []string, maxWorkers int, fn func(string) (T, error), ...) []T
- func ForEachFileWithErrors[T any](files []string, fn func(string) (T, error), onError ErrorFunc) []T
- func ForEachFileWithProgress[T any](files []string, fn func(string) (T, error), onProgress ProgressFunc) []T
- func ForEachFileWithResource[T any, R any](files []string, initResource func() (R, error), closeResource func(R), ...) []T
- func MapFiles[T any](files []string, fn func(*parser.Parser, string) (T, error)) []T
- func MapFilesN[T any](files []string, maxWorkers int, fn func(*parser.Parser, string) (T, error), ...) []T
- func MapFilesWithErrors[T any](files []string, fn func(*parser.Parser, string) (T, error), onError ErrorFunc) []T
- func MapFilesWithProgress[T any](files []string, fn func(*parser.Parser, string) (T, error), ...) []T
- type ErrorFunc
- type ProgressFunc
Constants ¶
const DefaultWorkerMultiplier = 2
DefaultWorkerMultiplier is the multiplier applied to NumCPU for worker count. 2x is optimal for mixed I/O and CGO workloads.
Variables ¶
This section is empty.
Functions ¶
func ForEachFile ¶
func ForEachFile[T any](files []string, fn func(string) (T, error)) []T
ForEachFile processes files in parallel, calling fn for each file. No parser is provided; use this for non-AST operations (e.g., SATD scanning). Uses 2x NumCPU workers by default.
func ForEachFileN ¶
func ForEachFileN[T any](files []string, maxWorkers int, fn func(string) (T, error), onProgress ProgressFunc, onError ErrorFunc) []T
ForEachFileN processes files with a configurable worker count and callbacks. If maxWorkers is <= 0, the worker count defaults to 2x NumCPU.
func ForEachFileWithErrors ¶
func ForEachFileWithErrors[T any](files []string, fn func(string) (T, error), onError ErrorFunc) []T
ForEachFileWithErrors processes files in parallel with error callback.
func ForEachFileWithProgress ¶
func ForEachFileWithProgress[T any](files []string, fn func(string) (T, error), onProgress ProgressFunc) []T
ForEachFileWithProgress processes files in parallel with optional progress callback.
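The ProgressFunc signature is not shown on this page; the sketch below assumes it receives a completed count and a total (an assumption, not the package's definition), and counts every file, pass or fail, so progress always reaches the total.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// progressFunc assumes the shape of the package's ProgressFunc:
// completed count plus total. The real signature is defined by
// the package; this is illustrative only.
type progressFunc func(done, total int)

func forEachFileWithProgress(files []string, fn func(string) (string, error), onProgress progressFunc) []string {
	var mu sync.Mutex
	var results []string
	var done atomic.Int64
	var wg sync.WaitGroup
	for _, f := range files {
		wg.Add(1)
		go func(f string) {
			defer wg.Done()
			v, err := fn(f)
			n := int(done.Add(1)) // count every file, pass or fail
			if onProgress != nil { // callback is optional
				onProgress(n, len(files))
			}
			if err != nil {
				return
			}
			mu.Lock()
			results = append(results, v)
			mu.Unlock()
		}(f)
	}
	wg.Wait()
	return results
}

func main() {
	var cmu sync.Mutex
	calls := 0
	out := forEachFileWithProgress([]string{"a", "b", "c"},
		func(f string) (string, error) { return f, nil },
		func(done, total int) { cmu.Lock(); calls++; cmu.Unlock() })
	fmt.Println(len(out), calls)
}
```

For brevity this sketch spawns one goroutine per file rather than a bounded pool; the pooled version from the ForEachFile sketch applies unchanged.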
func ForEachFileWithResource ¶
func ForEachFileWithResource[T any, R any](
	files []string,
	initResource func() (R, error),
	closeResource func(R),
	fn func(R, string) (T, error),
	onProgress ProgressFunc,
) []T
ForEachFileWithResource processes files in parallel, calling fn for each file with a per-worker resource. The initResource function is called once per worker to create the resource (e.g., git repo handle). The closeResource function is called when the worker is done to release the resource. Uses 2x NumCPU workers by default.
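The per-worker resource lifecycle described above can be sketched as follows (a hedged reimplementation, not the package source, with an explicit workers parameter for simplicity): each worker calls initResource once, shares the resource across all files it processes, and releases it via closeResource when the job channel drains. A worker whose init fails simply exits; the sketch assumes at least one worker initializes successfully.

```go
package main

import (
	"fmt"
	"sync"
)

// forEachFileWithResource sketches the documented lifecycle: init a
// resource once per worker, reuse it for that worker's files, close
// it when the worker finishes.
func forEachFileWithResource[T any, R any](
	files []string,
	workers int,
	initResource func() (R, error),
	closeResource func(R),
	fn func(R, string) (T, error),
) []T {
	jobs := make(chan string)
	var mu sync.Mutex
	var results []T
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			res, err := initResource()
			if err != nil {
				return // worker is unusable without its resource
			}
			defer closeResource(res)
			for f := range jobs {
				v, err := fn(res, f)
				if err != nil {
					continue
				}
				mu.Lock()
				results = append(results, v)
				mu.Unlock()
			}
		}()
	}
	for _, f := range files {
		jobs <- f
	}
	close(jobs)
	wg.Wait()
	return results
}

func main() {
	out := forEachFileWithResource(
		[]string{"x", "y"}, 2,
		func() (int, error) { return 42, nil }, // e.g., open a repo handle
		func(int) {},                           // release it
		func(r int, f string) (string, error) { return fmt.Sprintf("%d:%s", r, f), nil },
	)
	fmt.Println(len(out))
}
```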
func MapFiles ¶
func MapFiles[T any](files []string, fn func(*parser.Parser, string) (T, error)) []T
MapFiles processes files in parallel, calling fn for each file with a dedicated parser. Results are collected and returned in arbitrary order. Errors from individual files are silently skipped; use MapFilesWithErrors for error handling. Uses 2x NumCPU workers by default (optimal for mixed I/O and CGO workloads).
func MapFilesN ¶
func MapFilesN[T any](files []string, maxWorkers int, fn func(*parser.Parser, string) (T, error), onProgress ProgressFunc, onError ErrorFunc) []T
MapFilesN processes files with a configurable worker count and callbacks. If maxWorkers is <= 0, the worker count defaults to 2x NumCPU.
func MapFilesWithErrors ¶
func MapFilesWithErrors[T any](files []string, fn func(*parser.Parser, string) (T, error), onError ErrorFunc) []T
MapFilesWithErrors processes files in parallel with error callback. The onError callback is invoked for each file that fails processing.
func MapFilesWithProgress ¶
func MapFilesWithProgress[T any](files []string, fn func(*parser.Parser, string) (T, error), onProgress ProgressFunc) []T
MapFilesWithProgress processes files in parallel with optional progress callback.