Documentation ¶
Overview ¶
Package pdf implements reading of PDF files.
PDF is Adobe's Portable Document Format, ubiquitous on the internet. A PDF document is a complex data format built on a fairly simple structure. This package exposes the simple structure along with some wrappers to extract basic information. If more complex information is needed, it is possible to extract that information by interpreting the structure exposed by this package.
Specifically, a PDF is a data structure built from Values, each of which has one of the following Kinds:
Null, for the null object.
Integer, for an integer.
Real, for a floating-point number.
Bool, for a boolean value.
Name, for a name constant (as in /Helvetica).
String, for a string constant.
Dict, for a dictionary of name-value pairs.
Array, for an array of values.
Stream, for an opaque data stream and associated header dictionary.
The accessors on Value—Int64, Float64, Bool, Name, and so on—return a view of the data as the given type. When there is no appropriate view, the accessor returns a zero result. For example, the Name accessor returns the empty string if called on a Value v for which v.Kind() != Name. Returning zero values this way, especially from the Dict and Array accessors, which themselves return Values, makes it possible to traverse a PDF quickly without writing any error checking. On the other hand, it means that mistakes can go unreported.
The basic structure of the PDF file is exposed as the graph of Values.
Most richer data structures in a PDF file are dictionaries with specific interpretations of the name-value pairs. The Font and Page wrappers make the interpretation of a specific Value as the corresponding type easier. They are only helpers, though: they are implemented only in terms of the Value API and could be moved outside the package. Equally important, traversal of other PDF data structures can be implemented in other packages as needed.
Example (ZeroCopyInPDFProcessing) ¶
Demonstrates how to use zero-copy optimizations in actual PDF processing.
// Assume some text blocks are extracted from PDF
texts := []string{
	" First paragraph ",
	" Second paragraph ",
	" Third paragraph ",
}
// Process using zero-copy operations
builder := NewStringBuffer(1024)
for i, text := range texts {
	// Remove leading and trailing spaces (zero-copy)
	trimmed := TrimSpaceZeroCopy(text)
	builder.WriteString(trimmed)
	if i < len(texts)-1 {
		builder.WriteString("\n")
	}
}
result := builder.StringCopy()
fmt.Println(result)
Output:
First paragraph
Second paragraph
Third paragraph
Index ¶
- Variables
- func AutoWarmup() error
- func BatchCompareFloat64(a, b []float64, threshold float64) []bool
- func BatchHexDecode(hexStrings []string) ([][]byte, []error)
- func BenchmarkSortingAlgorithms(texts []Text, getCoord func(Text) float64) map[string]float64
- func BytesToString(b []byte) string
- func ClearGlobalStringPool()
- func CompareStringsZeroCopy(s1, s2 string) int
- func EstimateCapacity(currentLen int, growthFactor float64) int
- func ExampleOptimizations()
- func FastHexValidation(hexStr string) bool
- func FastSortTexts(texts []Text, less func(i, j int) bool)
- func FastSortTextsByX(texts []Text)
- func FastSortTextsByY(texts []Text)
- func FastStringConcat(strings ...string) string
- func FastStringConcatZC(parts ...string) string
- func FastStringSearch(haystack, needle string) int
- func GetBuilder() *strings.Builder
- func GetByteBuffer() *[]byte
- func GetPDFBuffer() *buffer
- func GetSizedBuffer(size int) []byte
- func HasPrefixZeroCopy(s, prefix string) bool
- func HasSuffixZeroCopy(s, suffix string) bool
- func HexDecodeSIMD(hexStr string) ([]byte, error)
- func HilbertXYToIndex(x, y, order uint32) uint64
- func InternString(s string) string
- func Interpret(strm Value, do func(stk *Stack, op string))
- func InterpretWithContext(ctx context.Context, strm Value, do func(stk *Stack, op string))
- func InterpretWithContextAndLimits(ctx context.Context, strm Value, do func(stk *Stack, op string), ...)
- func IsSameSentence(last, current Text) bool
- func JoinZeroCopy(parts []string, sep string) string
- func OptimizedStartup(config *StartupConfig) error
- func PreallocateCache(fontCacheSize, resultCacheSize int)
- func ProcessLargePDF(reader *Reader, chunkSize, bufferSize int, maxMemory int64, ...) error
- func ProcessTextWithMultiLanguage(reader *Reader) (map[Language][]ClassifiedBlock, error)
- func PutBlockSlice(s []ClassifiedBlock)
- func PutBuilder(b *strings.Builder)
- func PutByteBuffer(buf *[]byte)
- func PutPDFBuffer(b *buffer)
- func PutSizedBuffer(buf []byte)
- func PutSizedStringBuilder(sb *FastStringBuilder, estimatedSize int)
- func PutSizedTextSlice(slice []Text)
- func PutText(t *Text)
- func PutTextSlice(s []Text)
- func RadixSortFloat64(values []float64)
- func ResetSortingMetrics()
- func SmartTextRunsToPlain(texts []Text) string
- func SplitZeroCopy(s string, sep byte) []string
- func StringSliceToByteSlice(strings []string) [][]byte
- func StringToBytes(s string) []byte
- func SubstringZeroCopy(s string, start, end int) string
- func TrimSpaceZeroCopy(s string) string
- func WarmupGlobal(config *WarmupConfig) error
- func ZeroCopyStringSlice(data []byte, separators []byte) []string
- type AccessPattern
- type AccessPatternTracker
- type AdaptiveCapacityEstimator
- type AdaptiveProcessor
- type AdaptiveSorter
- type AsyncReader
- func (ar *AsyncReader) AsyncExtractStructured(ctx context.Context) (<-chan []ClassifiedBlock, <-chan error)
- func (ar *AsyncReader) AsyncExtractText(ctx context.Context) (<-chan string, <-chan error)
- func (ar *AsyncReader) AsyncExtractTextWithContext(ctx context.Context, opts ExtractOptions) (<-chan string, <-chan error)
- func (ar *AsyncReader) AsyncStream(ctx context.Context, processor func(Page, int) error) <-chan error
- func (ar *AsyncReader) StreamValueReader(ctx context.Context, v Value) (<-chan []byte, <-chan error)
- type AsyncReaderAt
- type BatchExtractOptions
- type BatchResult
- type BatchStringBuilder
- type BlockType
- type CacheContext
- type CacheEntry
- type CacheKeyGenerator
- func (ckg *CacheKeyGenerator) GenerateFullHash(data string) string
- func (ckg *CacheKeyGenerator) GeneratePageContentKey(pageNum int, readerHash string) string
- func (ckg *CacheKeyGenerator) GenerateReaderHash(reader *Reader) string
- func (ckg *CacheKeyGenerator) GenerateTextClassificationKey(pageNum int, readerHash string, processorParams string) string
- func (ckg *CacheKeyGenerator) GenerateTextOrderingKey(pageNum int, readerHash string, orderingParams string) string
- type CacheLineAlignedCounter
- type CacheLinePadded
- type CacheManager
- type CacheShard
- type CacheStats
- type CachedReader
- type ClassifiedBlock
- type ClassifiedBlockWithLanguage
- type Column
- type Columns
- type ConnectionPool
- type Content
- type EnhancedParallelProcessor
- func (epp *EnhancedParallelProcessor) ProcessPagesEnhanced(ctx context.Context, pages []Page, processorFunc func(Page) ([]Text, error)) ([][]Text, error)
- func (epp *EnhancedParallelProcessor) ProcessWithLoadBalancing(ctx context.Context, pages []Page, processorFunc func(Page) ([]Text, error)) ([][]Text, error)
- func (epp *EnhancedParallelProcessor) ProcessWithPipeline(ctx context.Context, pages []Page, stages []func(Page, []Text) ([]Text, error)) ([][]Text, error)
- type ExtractMode
- type ExtractOptions
- type ExtractResult
- type Extractor
- func (e *Extractor) Context(ctx context.Context) *Extractor
- func (e *Extractor) Extract() (*ExtractResult, error)
- func (e *Extractor) ExtractStructured() ([]ClassifiedBlock, error)
- func (e *Extractor) ExtractStyledTexts() ([]Text, error)
- func (e *Extractor) ExtractText() (string, error)
- func (e *Extractor) Mode(mode ExtractMode) *Extractor
- func (e *Extractor) Pages(pages ...int) *Extractor
- func (e *Extractor) SmartOrdering(enabled bool) *Extractor
- func (e *Extractor) Workers(n int) *Extractor
- type FastStringBuilder
- type Font
- type FontCache
- type FontCacheInterface
- type FontCacheStats
- type FontCacheType
- type FontPrefetcher
- type GlobalFontCache
- func (gfc *GlobalFontCache) Cleanup() int
- func (gfc *GlobalFontCache) Clear()
- func (gfc *GlobalFontCache) Get(key string) (*Font, bool)
- func (gfc *GlobalFontCache) GetOrCompute(key string, compute func() (*Font, error)) (*Font, error)
- func (gfc *GlobalFontCache) GetStats() FontCacheStats
- func (gfc *GlobalFontCache) Remove(key string)
- func (gfc *GlobalFontCache) Set(key string, font *Font)
- func (gfc *GlobalFontCache) StartCleanupRoutine(interval time.Duration) chan struct{}
- type GridKey
- type InplaceStringBuilder
- type KDNode
- type KDTree
- type Language
- type LanguageInfo
- type LanguageTextExtractor
- type LazyPage
- type LazyPageManager
- type LockFreeRingBuffer
- type MemoryArena
- type MemoryEfficientExtractor
- type Metadata
- type MultiLangProcessor
- func (mlp *MultiLangProcessor) DetectLanguage(text string) LanguageInfo
- func (mlp *MultiLangProcessor) GetLanguageConfidenceThreshold() float64
- func (mlp *MultiLangProcessor) GetLanguageName(lang Language) string
- func (mlp *MultiLangProcessor) GetSupportedLanguages() []Language
- func (mlp *MultiLangProcessor) IsEnglish(text string) bool
- func (mlp *MultiLangProcessor) IsFrench(text string) bool
- func (mlp *MultiLangProcessor) IsGerman(text string) bool
- func (mlp *MultiLangProcessor) IsSpanish(text string) bool
- func (mlp *MultiLangProcessor) ProcessTextWithLanguageDetection(texts []Text) []TextWithLanguage
- type MultiLanguageTextClassifier
- type MultiLevelCache
- type OptimizedFontCache
- func (ofc *OptimizedFontCache) Clear()
- func (ofc *OptimizedFontCache) Get(key string) (*Font, bool)
- func (ofc *OptimizedFontCache) GetOrCompute(key string, compute func() (*Font, error)) (*Font, error)
- func (ofc *OptimizedFontCache) GetStats() FontCacheStats
- func (ofc *OptimizedFontCache) Prefetch(keys []string, compute func(key string) (*Font, error))
- func (ofc *OptimizedFontCache) Remove(key string)
- func (ofc *OptimizedFontCache) Set(key string, font *Font)
- type OptimizedMemoryPool
- type OptimizedSorter
- func (os *OptimizedSorter) QuickSortTexts(texts []Text, less func(i, j int) bool)
- func (os *OptimizedSorter) SortTextHorizontalByOptimized(th TextHorizontal)
- func (os *OptimizedSorter) SortTextVerticalByOptimized(tv TextVertical)
- func (os *OptimizedSorter) SortTexts(texts []Text, less func(i, j int) bool)
- func (os *OptimizedSorter) SortTextsWithAlgorithm(texts []Text, less func(i, j int) bool, algorithm string)
- type OptimizedTextClusterSorter
- type Outline
- type PDFError
- type Page
- func (p Page) ClassifyTextBlocks() ([]ClassifiedBlock, error)
- func (p *Page) Cleanup()
- func (p Page) Content() Content
- func (p Page) Font(name string) Font
- func (p Page) Fonts() []string
- func (p Page) GetPlainText(fonts map[string]*Font) (string, error)
- func (p Page) GetPlainTextWithSmartOrdering(fonts map[string]*Font) (string, error)
- func (p Page) GetTextByColumn() (Columns, error)
- func (p Page) GetTextByRow() (Rows, error)
- func (p Page) OptimizedGetPlainText(fonts map[string]*Font) (string, error)
- func (p Page) OptimizedGetTextByColumn() (Columns, error)
- func (p Page) OptimizedGetTextByRow() (Rows, error)
- func (p Page) Resources() Value
- func (p *Page) SetFontCache(cache *GlobalFontCache)
- func (p *Page) SetFontCacheInterface(cache FontCacheInterface)
- type PageStream
- type ParallelExtractor
- type ParallelProcessor
- func (pp *ParallelProcessor) ProcessPages(ctx context.Context, pages []Page, processorFunc func(Page) ([]Text, error)) ([][]Text, error)
- func (pp *ParallelProcessor) ProcessTextBlocks(ctx context.Context, blocks []*TextBlock, ...) ([]*TextBlock, error)
- func (pp *ParallelProcessor) ProcessTextInParallel(ctx context.Context, texts []Text, processorFunc func(Text) (Text, error)) ([]Text, error)
- type ParallelTextExtractor
- type ParseLimits
- type PerformanceMetrics
- type Point
- type PoolStats
- type PoolWarmer
- type PrefetchItem
- type PrefetchQueue
- type PrefetchStats
- type RTreeNode
- type RTreeSpatialIndex
- type Reader
- func (r *Reader) BatchExtractText(pageNums []int, useLazy bool) (map[int]string, error)
- func (r *Reader) ClearCache()
- func (r *Reader) Close() error
- func (r *Reader) ExtractAllPagesParallel(ctx context.Context, workers int) ([]string, error)
- func (r *Reader) ExtractPagesBatch(opts BatchExtractOptions) <-chan BatchResult
- func (r *Reader) ExtractPagesBatchToString(opts BatchExtractOptions) (string, error)
- func (r *Reader) ExtractStructuredBatch(opts BatchExtractOptions) <-chan StructuredBatchResult
- func (r *Reader) ExtractWithContext(ctx context.Context, opts ExtractOptions) (io.Reader, error)
- func (r *Reader) GetCacheCapacity() int
- func (r *Reader) GetMetadata() (Metadata, error)
- func (r *Reader) GetPlainText() (reader io.Reader, err error)
- func (r *Reader) GetPlainTextConcurrent(workers int) (io.Reader, error)
- func (r *Reader) GetStyledTexts() (sentences []Text, err error)
- func (r *Reader) NumPage() int
- func (r *Reader) Outline() Outline
- func (r *Reader) Page(num int) Page
- func (r *Reader) SetCacheCapacity(n int)
- func (r *Reader) SetMetadata(meta Metadata) error
- func (r *Reader) Trailer() Value
- type Rect
- type ResourceManager
- type ResultCache
- func (rc *ResultCache) Clear()
- func (rc *ResultCache) Close()
- func (rc *ResultCache) Get(key string) (interface{}, bool)
- func (rc *ResultCache) GetHitRatio() float64
- func (rc *ResultCache) GetStats() CacheStats
- func (rc *ResultCache) Has(key string) bool
- func (rc *ResultCache) Put(key string, value interface{})
- func (rc *ResultCache) Remove(key string) bool
- type Row
- type Rows
- type ShardedCache
- type ShardedCacheEntry
- type ShardedCacheStats
- type SizedBytePool
- type SizedPool
- type SizedTextSlicePool
- type SortStrategy
- type SortingMetrics
- type SpatialIndex
- type SpatialIndexInterface
- type Stack
- type StartupConfig
- type StreamProcessor
- func (sp *StreamProcessor) Close()
- func (sp *StreamProcessor) ProcessPageStream(reader *Reader, handler func(PageStream) error) error
- func (sp *StreamProcessor) ProcessTextBlockStream(reader *Reader, handler func(TextBlockStream) error) error
- func (sp *StreamProcessor) ProcessTextStream(reader *Reader, handler func(TextStream) error) error
- type StreamingBatchExtractor
- type StreamingMetadataExtractor
- type StreamingTextClassifier
- type StreamingTextExtractor
- func (e *StreamingTextExtractor) Close()
- func (e *StreamingTextExtractor) GetProgress() float64
- func (e *StreamingTextExtractor) NextBatch() (results map[int]string, hasMore bool, err error)
- func (e *StreamingTextExtractor) NextPage() (pageNum int, text string, hasMore bool, err error)
- func (e *StreamingTextExtractor) Reset()
- type StringBuffer
- func (sb *StringBuffer) Bytes() []byte
- func (sb *StringBuffer) Cap() int
- func (sb *StringBuffer) Len() int
- func (sb *StringBuffer) Reset()
- func (sb *StringBuffer) String() string
- func (sb *StringBuffer) StringCopy() string
- func (sb *StringBuffer) WriteByte(b byte) error
- func (sb *StringBuffer) WriteBytes(b []byte)
- func (sb *StringBuffer) WriteString(s string)
- type StringBuilderPool
- type StringPool
- type StructuredBatchResult
- type Task
- type Text
- type TextBlock
- type TextBlockStream
- type TextClassifier
- type TextEncoding
- type TextHorizontal
- type TextStream
- type TextVertical
- type TextWithLanguage
- type Value
- func (v Value) Bool() bool
- func (v Value) Float64() float64
- func (v Value) Index(i int) Value
- func (v Value) Int64() int64
- func (v Value) IsNull() bool
- func (v Value) Key(key string) Value
- func (v Value) Keys() []string
- func (v Value) Kind() ValueKind
- func (v Value) Len() int
- func (v Value) Name() string
- func (v Value) RawString() string
- func (v Value) Reader() io.ReadCloser
- func (v Value) String() string
- func (v Value) Text() string
- func (v Value) TextFromUTF16() string
- type ValueKind
- type WSDeque
- type WSTask
- type WSWorker
- type WarmupConfig
- type WarmupStats
- type WorkStealingExecutor
- type WorkStealingScheduler
- type Worker
- type WorkerPool
- type WorkerPoolStats
- type YBand
- type ZeroCopyBuilder
- Bugs
Examples ¶
- Package (ZeroCopyInPDFProcessing)
- BatchExtractOptions (OptimizedCache)
- BatchExtractOptions (StandardCache)
- FastStringConcatZC
- GetGlobalFontCache
- GlobalFontCache
- JoinZeroCopy
- ParallelExtractor (Basic)
- Reader.ExtractAllPagesParallel
- Reader.ExtractPagesBatch
- Reader.ExtractPagesBatchToString
- SplitZeroCopy
- StreamingBatchExtractor
- StringBuffer
- StringPool
- TrimSpaceZeroCopy
Constants ¶
This section is empty.
Variables ¶
var (
	// ErrInvalidFont indicates a font definition is malformed or unsupported
	ErrInvalidFont = errors.New("invalid or unsupported font")
	// ErrUnsupportedEncoding indicates the character encoding is not supported
	ErrUnsupportedEncoding = errors.New("unsupported character encoding")
	// ErrMalformedStream indicates a content stream is malformed
	ErrMalformedStream = errors.New("malformed content stream")
	// ErrInvalidPage indicates an invalid page number or corrupted page
	ErrInvalidPage = errors.New("invalid page")
	// ErrEncrypted indicates the PDF is encrypted and cannot be read without a password
	ErrEncrypted = errors.New("PDF is encrypted")
	// ErrCorrupted indicates the PDF file structure is corrupted
	ErrCorrupted = errors.New("PDF file is corrupted")
	// ErrUnsupportedVersion indicates the PDF version is not supported
	ErrUnsupportedVersion = errors.New("unsupported PDF version")
	// ErrNoContent indicates the page has no content
	ErrNoContent = errors.New("page has no content")
)
Common errors
var DebugOn = false
DebugOn enables logging of messages to stdout. If problems arise during reading, set it to true.
var ErrContextCancelled = errors.New("pdf: context cancelled")
ErrContextCancelled is returned when a context is cancelled during PDF processing
var ErrInvalidPassword = fmt.Errorf("encrypted PDF: invalid password")
var ErrMaxParseTimeExceeded = errors.New("pdf: max parse time exceeded")
ErrMaxParseTimeExceeded is returned when max parse time is exceeded
var ErrMemoryLimitExceeded = errors.New("pdf: stream processor memory limit exceeded")
var ErrTimeout = errors.New("pdf: operation timeout")
ErrTimeout is returned when processing times out
var GlobalMetrics = &PerformanceMetrics{}
GlobalMetrics is the global performance metrics instance.
var GlobalPoolWarmer = &PoolWarmer{
bytePool: globalSizedBytePool,
textPool: globalSizedTextSlicePool,
}
GlobalPoolWarmer is the global pool warmer instance.
Functions ¶
func AutoWarmup ¶ added in v1.0.2
func AutoWarmup() error
AutoWarmup performs automatic warmup, selecting a configuration based on available memory.
func BatchCompareFloat64 ¶
BatchCompareFloat64 performs a SIMD-friendly batch comparison of float64 slices (a portable implementation; actual SIMD assembly would be needed for hardware vectorization).
func BatchHexDecode ¶ added in v1.1.6
BatchHexDecode processes multiple hex strings in parallel using SIMD operations
func BenchmarkSortingAlgorithms ¶ added in v1.0.1
BenchmarkSortingAlgorithms compares performance of different algorithms
func BytesToString ¶ added in v1.0.2
BytesToString performs a zero-copy conversion from []byte to string. Warning: the returned string directly references the underlying byte array; do not modify the original []byte.
func ClearGlobalStringPool ¶ added in v1.0.2
func ClearGlobalStringPool()
ClearGlobalStringPool clears the global string pool
func CompareStringsZeroCopy ¶ added in v1.0.2
CompareStringsZeroCopy performs a zero-copy string comparison. It returns -1 if s1 < s2, 0 if s1 == s2, and 1 if s1 > s2.
func EstimateCapacity ¶
EstimateCapacity provides better capacity estimation for slices
func FastHexValidation ¶ added in v1.1.6
FastHexValidation performs SIMD-style validation of hex strings
func FastSortTexts ¶ added in v1.0.1
FastSortTexts sorts texts using the fastest algorithm for the comparison function
func FastSortTextsByX ¶ added in v1.0.1
func FastSortTextsByX(texts []Text)
FastSortTextsByX sorts texts by X coordinate using the fastest algorithm
func FastSortTextsByY ¶ added in v1.0.1
func FastSortTextsByY(texts []Text)
FastSortTextsByY sorts texts by Y coordinate using the fastest algorithm
func FastStringConcat ¶
FastStringConcat concatenates strings with optimized memory allocation
func FastStringConcatZC ¶ added in v1.0.2
FastStringConcatZC performs fast concatenation of multiple strings (zero-copy version).
Example ¶
Demonstrates fast string concatenation.
result := FastStringConcatZC("Hello", " ", "World", "!")
fmt.Println(result)
Output: Hello World!
func FastStringSearch ¶
FastStringSearch performs optimized string search using SIMD-like operations. This is a simplified implementation that can be extended with actual SIMD instructions.
func GetBuilder ¶
GetBuilder retrieves a strings.Builder from the pool
func GetByteBuffer ¶
func GetByteBuffer() *[]byte
GetByteBuffer retrieves a byte buffer from the pool
func GetSizedBuffer ¶ added in v1.0.1
GetSizedBuffer retrieves a byte buffer from the global sized pool. This is a convenience function for common use cases.
func HasPrefixZeroCopy ¶ added in v1.0.2
HasPrefixZeroCopy performs a zero-copy prefix check.
func HasSuffixZeroCopy ¶ added in v1.0.2
HasSuffixZeroCopy performs a zero-copy suffix check.
func HexDecodeSIMD ¶ added in v1.1.6
HexDecodeSIMD performs SIMD-optimized hex string decoding This function uses vectorized operations to decode hex strings efficiently
func HilbertXYToIndex ¶
HilbertXYToIndex computes the Hilbert curve index of the point (x, y) at the given order (used for spatial indexing).
func InternString ¶ added in v1.0.2
InternString adds a string to the global pool.
func Interpret ¶
Interpret interprets the content in a stream as a basic PostScript program, pushing values onto a stack and then calling the do function to execute operators. The do function may push or pop values from the stack as needed to implement op.
Interpret handles the operators "dict", "currentdict", "begin", "end", "def", and "pop" itself.
Interpret is not a full-blown PostScript interpreter. Its job is to handle the very limited PostScript found in certain supporting file formats embedded in PDF files, such as cmap files that describe the mapping from font code points to Unicode code points.
A stream can also be represented by an array of streams, which has to be handled as a single stream: a simple stream is read only once, while for an array the length of each stream is obtained so it can be handled properly.
There is no support for executable blocks, among other limitations.
func InterpretWithContext ¶ added in v1.1.5
InterpretWithContext is like Interpret but accepts a context for cancellation support. When the context is cancelled, interpretation stops and returns.
func InterpretWithContextAndLimits ¶ added in v1.1.5
func InterpretWithContextAndLimits(ctx context.Context, strm Value, do func(stk *Stack, op string), limits *ParseLimits)
InterpretWithContextAndLimits is like InterpretWithContext but also accepts parse limits.
func IsSameSentence ¶
IsSameSentence reports whether the current text segment likely belongs to the same sentence as the last text segment, based on font, size, vertical position, and lack of sentence-ending punctuation in the last segment.
func JoinZeroCopy ¶ added in v1.0.2
JoinZeroCopy performs zero-copy string joining (single allocation).
Example ¶
Demonstrates zero-copy joining.
parts := []string{"apple", "banana", "cherry"}
result := JoinZeroCopy(parts, ", ")
fmt.Println(result)
Output: apple, banana, cherry
func OptimizedStartup ¶ added in v1.0.2
func OptimizedStartup(config *StartupConfig) error
OptimizedStartup runs an optimized startup process, including pool warmup, cache pre-allocation, etc.
func PreallocateCache ¶ added in v1.0.2
func PreallocateCache(fontCacheSize, resultCacheSize int)
PreallocateCache pre-allocates the caches (an additional feature).
func ProcessLargePDF ¶
func ProcessLargePDF(reader *Reader, chunkSize, bufferSize int, maxMemory int64, handler func(PageStream) error) error
ProcessLargePDF handles very large PDFs with streaming
func ProcessTextWithMultiLanguage ¶
func ProcessTextWithMultiLanguage(reader *Reader) (map[Language][]ClassifiedBlock, error)
ProcessTextWithMultiLanguage handles multi-language text processing for the entire PDF
func PutBlockSlice ¶
func PutBlockSlice(s []ClassifiedBlock)
PutBlockSlice returns a ClassifiedBlock slice to the pool
func PutBuilder ¶
PutBuilder returns a strings.Builder to the pool after resetting it
func PutByteBuffer ¶
func PutByteBuffer(buf *[]byte)
PutByteBuffer returns a byte buffer to the pool
func PutPDFBuffer ¶
func PutPDFBuffer(b *buffer)
PutPDFBuffer returns a PDF buffer to the pool after resetting
func PutSizedBuffer ¶ added in v1.0.1
func PutSizedBuffer(buf []byte)
PutSizedBuffer returns a byte buffer to the global sized pool. This is a convenience function for common use cases.
func PutSizedStringBuilder ¶ added in v1.0.1
func PutSizedStringBuilder(sb *FastStringBuilder, estimatedSize int)
PutSizedStringBuilder returns a string builder to the appropriate pool
func PutSizedTextSlice ¶ added in v1.0.1
func PutSizedTextSlice(slice []Text)
PutSizedTextSlice returns a Text slice to the global pool
func ResetSortingMetrics ¶ added in v1.0.1
func ResetSortingMetrics()
ResetSortingMetrics resets the sorting metrics
func SmartTextRunsToPlain ¶
SmartTextRunsToPlain converts text runs to plain text using improved ordering
func SplitZeroCopy ¶ added in v1.0.2
SplitZeroCopy performs zero-copy string splitting. The strings in the returned slice are all slices of the original string.
Example ¶
Demonstrates zero-copy splitting.
str := "a,b,c,d"
parts := SplitZeroCopy(str, ',')
for _, part := range parts {
fmt.Println(part)
}
Output:
a
b
c
d
func StringSliceToByteSlice ¶ added in v1.0.2
StringSliceToByteSlice performs a zero-copy conversion of each string in a []string. Each element in the returned [][]byte is read-only.
func StringToBytes ¶ added in v1.0.2
StringToBytes performs a zero-copy conversion from string to []byte. Warning: the returned []byte is read-only; do not modify it.
func SubstringZeroCopy ¶ added in v1.0.2
SubstringZeroCopy performs zero-copy substring extraction. (All string slicing in Go is already zero-copy.)
func TrimSpaceZeroCopy ¶ added in v1.0.2
TrimSpaceZeroCopy trims leading and trailing spaces without copying.
Example ¶
Demonstrates zero-copy space trimming.
str := " hello world "
result := TrimSpaceZeroCopy(str)
fmt.Println(result)
Output: hello world
func WarmupGlobal ¶ added in v1.0.2
func WarmupGlobal(config *WarmupConfig) error
WarmupGlobal warms up the global memory pool (a convenience function).
func ZeroCopyStringSlice ¶
ZeroCopyStringSlice creates a string slice without copying data. WARNING: this is unsafe; the returned strings share memory with the input.
Types ¶
type AccessPattern ¶ added in v1.0.2
type AccessPattern struct {
// contains filtered or unexported fields
}
AccessPattern records the access pattern of a single font.
type AccessPatternTracker ¶ added in v1.0.2
type AccessPatternTracker struct {
// contains filtered or unexported fields
}
AccessPatternTracker tracks font access patterns
type AdaptiveCapacityEstimator ¶
type AdaptiveCapacityEstimator struct {
// contains filtered or unexported fields
}
AdaptiveCapacityEstimator is an adaptive capacity estimator. It dynamically adjusts pre-allocated capacity based on historical data, reducing reallocations.
func NewAdaptiveCapacityEstimator ¶
func NewAdaptiveCapacityEstimator(maxSamples int) *AdaptiveCapacityEstimator
NewAdaptiveCapacityEstimator creates new adaptive estimator
func (*AdaptiveCapacityEstimator) Estimate ¶
func (ace *AdaptiveCapacityEstimator) Estimate(hint int) int
Estimate estimates required capacity based on historical data
func (*AdaptiveCapacityEstimator) Record ¶
func (ace *AdaptiveCapacityEstimator) Record(actual int)
Record records actual capacity used
type AdaptiveProcessor ¶ added in v1.0.2
type AdaptiveProcessor struct {
// contains filtered or unexported fields
}
AdaptiveProcessor is an adaptive processor that automatically adjusts its concurrency level based on system load.
func NewAdaptiveProcessor ¶ added in v1.0.2
func NewAdaptiveProcessor(min, max int) *AdaptiveProcessor
NewAdaptiveProcessor creates adaptive processor
func (*AdaptiveProcessor) AdjustWorkers ¶ added in v1.0.2
func (ap *AdaptiveProcessor) AdjustWorkers()
AdjustWorkers adjusts worker count based on system load
func (*AdaptiveProcessor) GetWorkerCount ¶ added in v1.0.2
func (ap *AdaptiveProcessor) GetWorkerCount() int
GetWorkerCount gets current worker goroutine count
type AdaptiveSorter ¶ added in v1.0.1
type AdaptiveSorter struct {
// contains filtered or unexported fields
}
AdaptiveSorter selects the best sorting algorithm based on data characteristics
func NewAdaptiveSorter ¶ added in v1.0.1
func NewAdaptiveSorter() *AdaptiveSorter
NewAdaptiveSorter creates a new adaptive sorter with default thresholds
func (*AdaptiveSorter) SortTextsByComparison ¶ added in v1.0.1
func (as *AdaptiveSorter) SortTextsByComparison(texts []Text, less func(i, j int) bool)
SortTextsByComparison sorts texts using a comparison function
func (*AdaptiveSorter) SortTextsByCoordinate ¶ added in v1.0.1
func (as *AdaptiveSorter) SortTextsByCoordinate(texts []Text, getCoord func(Text) float64)
SortTextsByCoordinate sorts texts by a numeric coordinate using the best algorithm
type AsyncReader ¶
type AsyncReader struct {
*Reader
// contains filtered or unexported fields
}
AsyncReader wraps a Reader to provide asynchronous operations
func NewAsyncReader ¶
func NewAsyncReader(reader *Reader) *AsyncReader
NewAsyncReader creates a new async reader with async I/O support
func (*AsyncReader) AsyncExtractStructured ¶
func (ar *AsyncReader) AsyncExtractStructured(ctx context.Context) (<-chan []ClassifiedBlock, <-chan error)
AsyncExtractStructured extracts structured text asynchronously
func (*AsyncReader) AsyncExtractText ¶
func (ar *AsyncReader) AsyncExtractText(ctx context.Context) (<-chan string, <-chan error)
AsyncExtractText extracts text from all pages asynchronously
func (*AsyncReader) AsyncExtractTextWithContext ¶
func (ar *AsyncReader) AsyncExtractTextWithContext(ctx context.Context, opts ExtractOptions) (<-chan string, <-chan error)
AsyncExtractTextWithContext extracts text with cancellation and timeout support
func (*AsyncReader) AsyncStream ¶
func (ar *AsyncReader) AsyncStream(ctx context.Context, processor func(Page, int) error) <-chan error
AsyncStream processes the PDF file with async I/O operations
func (*AsyncReader) StreamValueReader ¶
func (ar *AsyncReader) StreamValueReader(ctx context.Context, v Value) (<-chan []byte, <-chan error)
StreamValueReader provides async streaming of value data
type AsyncReaderAt ¶
type AsyncReaderAt struct {
// contains filtered or unexported fields
}
AsyncReaderAt provides async I/O for low-level file operations
func NewAsyncReaderAt ¶
func NewAsyncReaderAt(reader io.ReaderAt) *AsyncReaderAt
NewAsyncReaderAt creates a new async reader with async I/O support
func (*AsyncReaderAt) ReadAtAsync ¶
func (ara *AsyncReaderAt) ReadAtAsync(ctx context.Context, buf []byte, offset int64) (<-chan int, <-chan error)
ReadAtAsync reads from the file asynchronously
type BatchExtractOptions ¶ added in v1.0.1
type BatchExtractOptions struct {
// Pages to extract (nil means all pages)
Pages []int
// Number of concurrent workers (0 = NumCPU)
Workers int
// Whether to use smart text ordering
SmartOrdering bool
// Context for cancellation
Context context.Context
// Buffer size for each page result (0 = default 2KB)
PageBufferSize int
// Whether to enable font cache for this batch (default: false)
// When enabled, a temporary font cache is created for the batch
// to reduce redundant font parsing across pages
UseFontCache bool
// Maximum number of fonts to cache (0 = default 1000)
// Only used when UseFontCache is true
FontCacheSize int
// FontCacheType specifies which cache implementation to use
// - FontCacheStandard: Standard implementation (default)
// - FontCacheOptimized: High-performance optimized cache (10-85x faster)
// Only used when UseFontCache is true
FontCacheType FontCacheType
// PageTimeout is the maximum time allowed for processing a single page
// If zero, defaults to 30 seconds. Set to negative value to disable.
PageTimeout time.Duration
// ParseLimits configures resource limits for parsing operations
// If nil, uses DefaultParseLimits()
ParseLimits *ParseLimits
}
BatchExtractOptions configures batch extraction behavior
Example (OptimizedCache) ¶
ExampleBatchExtractOptions_optimizedCache demonstrates using the optimized cache
// This example shows how to use the optimized cache
opts := BatchExtractOptions{
Workers: 8,
SmartOrdering: true,
UseFontCache: true,
FontCacheType: FontCacheOptimized, // Optimized cache (10-85x faster)
FontCacheSize: 2000,
}
fmt.Printf("Cache type: Optimized, Size: %d\n", opts.FontCacheSize)
Output: Cache type: Optimized, Size: 2000
Example (StandardCache) ¶
ExampleBatchExtractOptions_standardCache demonstrates using the standard cache
// This example shows how to use the standard cache
opts := BatchExtractOptions{
Workers: 4,
SmartOrdering: true,
UseFontCache: true,
FontCacheType: FontCacheStandard, // Standard cache
FontCacheSize: 1000,
}
fmt.Printf("Cache type: Standard, Size: %d\n", opts.FontCacheSize)
Output: Cache type: Standard, Size: 1000
type BatchResult ¶ added in v1.0.1
BatchResult contains the result of extracting a single page
type BatchStringBuilder ¶
type BatchStringBuilder struct {
// contains filtered or unexported fields
}
BatchStringBuilder builds strings in batches, avoiding repeated reallocations by calculating the required capacity up front
func NewBatchStringBuilder ¶
func NewBatchStringBuilder(texts []Text) *BatchStringBuilder
NewBatchStringBuilder creates a batch string builder
func (*BatchStringBuilder) AppendTexts ¶
func (bsb *BatchStringBuilder) AppendTexts(texts []Text) string
AppendTexts appends text content in batch
func (*BatchStringBuilder) Reset ¶
func (bsb *BatchStringBuilder) Reset()
Reset resets the builder for reuse
func (*BatchStringBuilder) String ¶
func (bsb *BatchStringBuilder) String() string
String returns the built string
type CacheContext ¶
type CacheContext struct {
// contains filtered or unexported fields
}
CacheContext provides a context-aware cache with automatic cleanup
func NewCacheContext ¶
func NewCacheContext(parent context.Context, cache *ResultCache) *CacheContext
NewCacheContext creates a new context-aware cache
func (*CacheContext) Close ¶
func (cc *CacheContext) Close()
Close releases resources used by the cache context
func (*CacheContext) GetWithTimeout ¶
func (cc *CacheContext) GetWithTimeout(key string, timeout time.Duration) (interface{}, bool, error)
GetWithTimeout gets a value with timeout
type CacheEntry ¶
type CacheEntry struct {
Data interface{}
Expiration time.Time
AccessCount int64
LastAccess time.Time
Size int64 // Estimated size in bytes
}
CacheEntry represents a cached item
func (*CacheEntry) IsExpired ¶
func (ce *CacheEntry) IsExpired() bool
IsExpired checks if the cache entry has expired
type CacheKeyGenerator ¶
type CacheKeyGenerator struct{}
CacheKeyGenerator provides functions to generate cache keys
func NewCacheKeyGenerator ¶
func NewCacheKeyGenerator() *CacheKeyGenerator
NewCacheKeyGenerator creates a new key generator
func (*CacheKeyGenerator) GenerateFullHash ¶
func (ckg *CacheKeyGenerator) GenerateFullHash(data string) string
GenerateFullHash generates a hash from arbitrary data
func (*CacheKeyGenerator) GeneratePageContentKey ¶
func (ckg *CacheKeyGenerator) GeneratePageContentKey(pageNum int, readerHash string) string
GeneratePageContentKey generates a cache key for page content
func (*CacheKeyGenerator) GenerateReaderHash ¶
func (ckg *CacheKeyGenerator) GenerateReaderHash(reader *Reader) string
GenerateReaderHash generates a hash for the reader object (simplified)
func (*CacheKeyGenerator) GenerateTextClassificationKey ¶
func (ckg *CacheKeyGenerator) GenerateTextClassificationKey(pageNum int, readerHash string, processorParams string) string
GenerateTextClassificationKey generates a cache key for text classification
func (*CacheKeyGenerator) GenerateTextOrderingKey ¶
func (ckg *CacheKeyGenerator) GenerateTextOrderingKey(pageNum int, readerHash string, orderingParams string) string
GenerateTextOrderingKey generates a cache key for text ordering
type CacheLineAlignedCounter ¶
type CacheLineAlignedCounter struct {
// contains filtered or unexported fields
}
func NewCacheLineAlignedCounter ¶
func NewCacheLineAlignedCounter(n int) *CacheLineAlignedCounter
func (*CacheLineAlignedCounter) Add ¶
func (c *CacheLineAlignedCounter) Add(idx int, delta uint64)
func (*CacheLineAlignedCounter) Get ¶
func (c *CacheLineAlignedCounter) Get(idx int) uint64
type CacheLinePadded ¶
type CacheLinePadded struct {
// contains filtered or unexported fields
}
CacheLinePadded pads its contents to a full cache line to avoid false sharing between adjacent values
type CacheManager ¶
type CacheManager struct {
// contains filtered or unexported fields
}
CacheManager provides centralized cache management
func NewCacheManager ¶
func NewCacheManager() *CacheManager
NewCacheManager creates a new cache manager with separate caches for different data types
func (*CacheManager) GetClassificationCache ¶
func (cm *CacheManager) GetClassificationCache() *ResultCache
GetClassificationCache returns the classification cache
func (*CacheManager) GetMetadataCache ¶
func (cm *CacheManager) GetMetadataCache() *ResultCache
GetMetadataCache returns the metadata cache
func (*CacheManager) GetPageCache ¶
func (cm *CacheManager) GetPageCache() *ResultCache
GetPageCache returns the page content cache
func (*CacheManager) GetTextOrderingCache ¶
func (cm *CacheManager) GetTextOrderingCache() *ResultCache
GetTextOrderingCache returns the text ordering cache
func (*CacheManager) GetTotalStats ¶
func (cm *CacheManager) GetTotalStats() CacheStats
GetTotalStats returns combined statistics for all caches
type CacheShard ¶ added in v1.0.2
type CacheShard struct {
// contains filtered or unexported fields
}
CacheShard represents a single shard of the cache
type CacheStats ¶
type CacheStats struct {
Hits int64
Misses int64
Evictions int64
CurrentSize int64
MaxSize int64
Entries int64
}
CacheStats provides statistics about cache performance
type CachedReader ¶
type CachedReader struct {
*Reader
// contains filtered or unexported fields
}
CachedReader wraps a Reader to provide caching functionality
func NewCachedReader ¶
func NewCachedReader(reader *Reader, cache *ResultCache) *CachedReader
NewCachedReader creates a new cached reader
func (*CachedReader) CachedClassifyTextBlocks ¶
func (cr *CachedReader) CachedClassifyTextBlocks(pageNum int) ([]ClassifiedBlock, error)
CachedClassifyTextBlocks returns classified text blocks with caching
func (*CachedReader) CachedPage ¶
func (cr *CachedReader) CachedPage(pageNum int) ([]Text, error)
CachedPage returns page content with caching
type ClassifiedBlock ¶
type ClassifiedBlock struct {
Type BlockType // Semantic type of the block
Level int // Hierarchy level (for titles: 1=h1, 2=h2, etc.)
Content []Text // Text runs in this block
Bounds Rect // Bounding box
Text string // Concatenated text content
}
ClassifiedBlock represents a classified block of text with semantic information
func GetBlockSlice ¶
func GetBlockSlice() []ClassifiedBlock
GetBlockSlice retrieves a ClassifiedBlock slice from the pool
func GetTextByType ¶
func GetTextByType(blocks []ClassifiedBlock, blockType BlockType) []ClassifiedBlock
GetTextByType returns all text blocks of a specific type
func GetTitles ¶
func GetTitles(blocks []ClassifiedBlock, level int) []ClassifiedBlock
GetTitles returns all title blocks, optionally filtered by level
type ClassifiedBlockWithLanguage ¶
type ClassifiedBlockWithLanguage struct {
ClassifiedBlock
Language LanguageInfo
}
ClassifiedBlockWithLanguage represents a classified block with language information
type Column ¶
type Column struct {
Position int64
Content TextVertical
}
Column represents the contents of a column
type ConnectionPool ¶
type ConnectionPool struct {
// contains filtered or unexported fields
}
ConnectionPool manages a pool of connections/resources
func NewConnectionPool ¶
func NewConnectionPool(maxSize int, newFunc func() interface{}, closeFunc func(interface{})) *ConnectionPool
NewConnectionPool creates a new connection pool
func (*ConnectionPool) Close ¶
func (cp *ConnectionPool) Close()
Close closes all connections in the pool
func (*ConnectionPool) Get ¶
func (cp *ConnectionPool) Get() interface{}
Get retrieves a connection from the pool
func (*ConnectionPool) Put ¶
func (cp *ConnectionPool) Put(conn interface{})
Put returns a connection to the pool
type EnhancedParallelProcessor ¶ added in v1.0.2
type EnhancedParallelProcessor struct {
// contains filtered or unexported fields
}
EnhancedParallelProcessor is an enhanced parallel processor that provides better concurrency control, load balancing, and error handling
func NewEnhancedParallelProcessor ¶ added in v1.0.2
func NewEnhancedParallelProcessor(workers int, batchSize int) *EnhancedParallelProcessor
NewEnhancedParallelProcessor creates enhanced parallel processor
func (*EnhancedParallelProcessor) ProcessPagesEnhanced ¶ added in v1.0.2
func (epp *EnhancedParallelProcessor) ProcessPagesEnhanced(ctx context.Context, pages []Page, processorFunc func(Page) ([]Text, error)) ([][]Text, error)
ProcessPagesEnhanced processes pages in parallel with enhancements
type ExtractMode ¶
type ExtractMode int
ExtractMode specifies the type of extraction to perform
const (
	ModePlain      ExtractMode = iota // Plain text extraction
	ModeStyled                        // Text with style information
	ModeStructured                    // Structured text with classification
)
type ExtractOptions ¶
type ExtractOptions struct {
Workers int // Number of concurrent workers (0 = use NumCPU)
PageRange []int // Specific pages to extract (nil = all pages)
}
ExtractOptions configures text extraction behavior
type ExtractResult ¶
type ExtractResult struct {
Text string // Plain text (for ModePlain)
StyledTexts []Text // Styled texts (for ModeStyled)
ClassifiedBlocks []ClassifiedBlock // Classified blocks (for ModeStructured)
Metadata Metadata // Document metadata
PageCount int // Total number of pages
}
ExtractResult contains the results of text extraction
type Extractor ¶
type Extractor struct {
// contains filtered or unexported fields
}
Extractor provides a builder pattern for configuring and executing extraction
func NewExtractor ¶
NewExtractor creates a new extractor for the given reader
func (*Extractor) Extract ¶
func (e *Extractor) Extract() (*ExtractResult, error)
Extract performs the extraction and returns the result
func (*Extractor) ExtractStructured ¶
func (e *Extractor) ExtractStructured() ([]ClassifiedBlock, error)
ExtractStructured is a convenience method for extracting structured text
func (*Extractor) ExtractStyledTexts ¶
ExtractStyledTexts is a convenience method for extracting styled texts
func (*Extractor) ExtractText ¶
ExtractText is a convenience method for extracting plain text
func (*Extractor) Mode ¶
func (e *Extractor) Mode(mode ExtractMode) *Extractor
Mode sets the extraction mode
func (*Extractor) SmartOrdering ¶
SmartOrdering enables smart text ordering for multi-column layouts
type FastStringBuilder ¶
type FastStringBuilder struct {
// contains filtered or unexported fields
}
FastStringBuilder provides optimized string building with pre-allocation
func GetSizedStringBuilder ¶ added in v1.0.1
func GetSizedStringBuilder(estimatedSize int) *FastStringBuilder
GetSizedStringBuilder retrieves a string builder from the appropriate pool
func NewFastStringBuilder ¶
func NewFastStringBuilder(estimatedSize int) *FastStringBuilder
NewFastStringBuilder creates a builder with estimated capacity
func (*FastStringBuilder) Len ¶
func (b *FastStringBuilder) Len() int
Len returns the current length
func (*FastStringBuilder) String ¶
func (b *FastStringBuilder) String() string
String returns the accumulated string
func (*FastStringBuilder) WriteByte ¶
func (b *FastStringBuilder) WriteByte(c byte) error
WriteByte appends a byte
func (*FastStringBuilder) WriteString ¶
func (b *FastStringBuilder) WriteString(s string)
WriteString appends a string
type Font ¶
type Font struct {
V Value
// contains filtered or unexported fields
}
A Font represents a font in a PDF file. The methods interpret a Font dictionary stored in V.
func (*Font) Encoder ¶
func (f *Font) Encoder() TextEncoding
Encoder returns the encoding between font code point sequences and UTF-8. Pointer receiver is required so the computed encoder is cached on the shared Font instance instead of a copy. The previous value-receiver implementation rebuilt the encoder for every call, causing large allocations to pile up during batch extraction.
type FontCache ¶
type FontCache struct {
// contains filtered or unexported fields
}
FontCache stores parsed fonts to avoid re-parsing across pages
type FontCacheInterface ¶ added in v1.0.1
type FontCacheInterface interface {
Get(key string) (*Font, bool)
Set(key string, font *Font)
Clear()
GetStats() FontCacheStats
}
FontCacheInterface defines the common interface for font caches
type FontCacheStats ¶ added in v1.0.1
type FontCacheStats struct {
Entries int
MaxEntries int
Hits uint64
Misses uint64
HitRate float64
AvgAccesses float64
}
FontCacheStats describes font cache statistics
type FontCacheType ¶ added in v1.0.1
type FontCacheType int
FontCacheType specifies which font cache implementation to use
const (
	// FontCacheStandard uses the standard GlobalFontCache (default)
	//   - Stable and well-tested
	//   - Good performance for most use cases
	//   - Simpler implementation
	FontCacheStandard FontCacheType = iota

	// FontCacheOptimized uses the OptimizedFontCache
	//   - 10-85x faster than standard (depending on workload)
	//   - Lock-free read path with 16 shards
	//   - Best for high-concurrency scenarios (>1000 qps)
	//   - Recommended for production environments with heavy load
	FontCacheOptimized
)
type FontPrefetcher ¶ added in v1.0.2
type FontPrefetcher struct {
// contains filtered or unexported fields
}
FontPrefetcher implements an intelligent font prefetch strategy, predicting access patterns and preloading fonts that are likely to be needed
func NewFontPrefetcher ¶ added in v1.0.2
func NewFontPrefetcher(cache *OptimizedFontCache) *FontPrefetcher
NewFontPrefetcher creates a new font prefetcher
func (*FontPrefetcher) ClearPatterns ¶ added in v1.0.2
func (fp *FontPrefetcher) ClearPatterns()
ClearPatterns clears access patterns
func (*FontPrefetcher) Close ¶ added in v1.0.2
func (fp *FontPrefetcher) Close()
Close closes the prefetcher
func (*FontPrefetcher) Disable ¶ added in v1.0.2
func (fp *FontPrefetcher) Disable()
Disable disables prefetching
func (*FontPrefetcher) Enable ¶ added in v1.0.2
func (fp *FontPrefetcher) Enable()
Enable enables prefetching
func (*FontPrefetcher) GetStats ¶ added in v1.0.2
func (fp *FontPrefetcher) GetStats() PrefetchStats
GetStats gets prefetch statistics
func (*FontPrefetcher) RecordAccess ¶ added in v1.0.2
func (fp *FontPrefetcher) RecordAccess(fontKey string, relatedKeys []string)
RecordAccess records a font access
type GlobalFontCache ¶ added in v1.0.1
type GlobalFontCache struct {
// contains filtered or unexported fields
}
GlobalFontCache implements an enhanced global font cache with:
- LRU eviction for memory control
- Hit/miss statistics for monitoring
- Content-based hashing for accurate cache keys
Example ¶
// Create a cache with max 100 entries and 1 hour expiration
cache := NewGlobalFontCache(100, 1*time.Hour)
// Store a font
font := &Font{}
cache.Set("MyFont", font)
// Retrieve the font
retrieved, ok := cache.Get("MyFont")
if ok {
fmt.Println("Font found in cache")
_ = retrieved
}
// Get statistics
stats := cache.GetStats()
fmt.Printf("Cache entries: %d, Hit rate: %.2f%%\n",
stats.Entries, stats.HitRate*100)
func GetGlobalFontCache ¶ added in v1.0.1
func GetGlobalFontCache() *GlobalFontCache
GetGlobalFontCache returns the global font cache instance
Example ¶
// Get the global singleton instance
cache := GetGlobalFontCache()
font := &Font{}
cache.Set("GlobalFont", font)
// The same instance can be accessed from anywhere
sameCacheInstance := GetGlobalFontCache()
retrieved, _ := sameCacheInstance.Get("GlobalFont")
_ = retrieved
func NewGlobalFontCache ¶ added in v1.0.1
func NewGlobalFontCache(maxEntries int, maxAge time.Duration) *GlobalFontCache
NewGlobalFontCache creates a new global font cache
func (*GlobalFontCache) Cleanup ¶ added in v1.0.1
func (gfc *GlobalFontCache) Cleanup() int
Cleanup removes expired entries
func (*GlobalFontCache) Clear ¶ added in v1.0.1
func (gfc *GlobalFontCache) Clear()
Clear removes all fonts from the cache
func (*GlobalFontCache) Get ¶ added in v1.0.1
func (gfc *GlobalFontCache) Get(key string) (*Font, bool)
Get retrieves a font from the cache
func (*GlobalFontCache) GetOrCompute ¶ added in v1.0.1
GetOrCompute retrieves a font from the cache or computes it if not present. It is a convenience function that combines Get and Set
func (*GlobalFontCache) GetStats ¶ added in v1.0.1
func (gfc *GlobalFontCache) GetStats() FontCacheStats
GetStats returns current cache statistics
func (*GlobalFontCache) Remove ¶ added in v1.0.1
func (gfc *GlobalFontCache) Remove(key string)
Remove removes a font from the cache
func (*GlobalFontCache) Set ¶ added in v1.0.1
func (gfc *GlobalFontCache) Set(key string, font *Font)
Set stores a font in the cache
func (*GlobalFontCache) StartCleanupRoutine ¶ added in v1.0.1
func (gfc *GlobalFontCache) StartCleanupRoutine(interval time.Duration) chan struct{}
StartCleanupRoutine starts a background goroutine to periodically clean up expired entries
type InplaceStringBuilder ¶ added in v1.0.2
type InplaceStringBuilder struct {
// contains filtered or unexported fields
}
InplaceStringBuilder is an in-place string builder that avoids intermediate allocations
func NewInplaceStringBuilder ¶ added in v1.0.2
func NewInplaceStringBuilder(capacity int) *InplaceStringBuilder
NewInplaceStringBuilder creates a new in-place string builder
func (*InplaceStringBuilder) Append ¶ added in v1.0.2
func (isb *InplaceStringBuilder) Append(s string)
Append appends a string
func (*InplaceStringBuilder) Build ¶ added in v1.0.2
func (isb *InplaceStringBuilder) Build() string
Build builds the final string in a single allocation
func (*InplaceStringBuilder) Len ¶ added in v1.0.2
func (isb *InplaceStringBuilder) Len() int
Len returns the total length
func (*InplaceStringBuilder) Reset ¶ added in v1.0.2
func (isb *InplaceStringBuilder) Reset()
Reset resets the builder
type KDTree ¶
type KDTree struct {
// contains filtered or unexported fields
}
KDTree is a KD-tree spatial index providing O(log n) nearest-neighbor search
func BuildKDTree ¶
BuildKDTree builds a KD-tree from text blocks
type LanguageInfo ¶
type LanguageInfo struct {
Language Language
Confidence float64 // Confidence level (0.0 to 1.0)
Characters []rune // Unique characters in the text
WordCount int // Number of words in the text
SentenceCount int // Number of sentences in the text
}
LanguageInfo contains information about a detected language
type LanguageTextExtractor ¶
type LanguageTextExtractor struct {
// contains filtered or unexported fields
}
LanguageTextExtractor extracts text while detecting languages
func NewLanguageTextExtractor ¶
func NewLanguageTextExtractor() *LanguageTextExtractor
NewLanguageTextExtractor creates a new language-aware text extractor
func (*LanguageTextExtractor) ExtractTextByLanguage ¶
func (lte *LanguageTextExtractor) ExtractTextByLanguage(reader *Reader) (map[Language][]Text, error)
ExtractTextByLanguage extracts text grouped by detected language
func (*LanguageTextExtractor) GetLanguageStats ¶
func (lte *LanguageTextExtractor) GetLanguageStats(texts []Text) map[Language]int
GetLanguageStats returns statistics about languages detected in the text
func (*LanguageTextExtractor) GetTextsByLanguage ¶
func (lte *LanguageTextExtractor) GetTextsByLanguage(texts []Text, targetLang Language) []Text
GetTextsByLanguage returns text elements filtered by specific language
type LazyPage ¶
type LazyPage struct {
// contains filtered or unexported fields
}
LazyPage provides lazy loading of page content to reduce memory usage for large PDFs where not all pages need to be processed
func NewLazyPage ¶
NewLazyPage creates a lazy-loading page wrapper
func (*LazyPage) GetContent ¶
GetContent loads and returns the page content (cached after first call)
type LazyPageManager ¶
type LazyPageManager struct {
// contains filtered or unexported fields
}
LazyPageManager manages lazy loading of multiple pages
func NewLazyPageManager ¶
func NewLazyPageManager(r *Reader, maxCached int) *LazyPageManager
NewLazyPageManager creates a manager with LRU cache
func (*LazyPageManager) GetPage ¶
func (m *LazyPageManager) GetPage(pageNum int) *LazyPage
GetPage returns a lazy page, loading it if necessary
func (*LazyPageManager) GetStats ¶
func (m *LazyPageManager) GetStats() (totalPages, loadedPages int)
GetStats returns cache statistics
type LockFreeRingBuffer ¶
type LockFreeRingBuffer struct {
// contains filtered or unexported fields
}
LockFreeRingBuffer is a lock-free ring buffer for producer-consumer pipelines
func NewLockFreeRingBuffer ¶
func NewLockFreeRingBuffer(size int) *LockFreeRingBuffer
func (*LockFreeRingBuffer) Pop ¶
func (rb *LockFreeRingBuffer) Pop() (interface{}, bool)
func (*LockFreeRingBuffer) Push ¶
func (rb *LockFreeRingBuffer) Push(item interface{}) bool
type MemoryArena ¶
type MemoryArena struct {
// contains filtered or unexported fields
}
MemoryArena is a chunk-based memory pool that reduces GC pressure
func NewMemoryArena ¶
func NewMemoryArena(chunkSize int) *MemoryArena
func (*MemoryArena) Alloc ¶
func (a *MemoryArena) Alloc(size int) []byte
func (*MemoryArena) Reset ¶
func (a *MemoryArena) Reset()
type MemoryEfficientExtractor ¶
type MemoryEfficientExtractor struct {
// contains filtered or unexported fields
}
MemoryEfficientExtractor provides memory-efficient extraction using streaming
func NewMemoryEfficientExtractor ¶
func NewMemoryEfficientExtractor(chunkSize, bufferSize int, maxMemory int64) *MemoryEfficientExtractor
NewMemoryEfficientExtractor creates a new memory-efficient extractor
func (*MemoryEfficientExtractor) ExtractTextStream ¶
func (mee *MemoryEfficientExtractor) ExtractTextStream(reader *Reader) (<-chan TextStream, <-chan error)
ExtractTextStream extracts text in a memory-efficient streaming way
func (*MemoryEfficientExtractor) ExtractTextToWriter ¶
func (mee *MemoryEfficientExtractor) ExtractTextToWriter(reader *Reader, writer io.Writer) (err error)
ExtractTextToWriter extracts text directly to an io.Writer to minimize memory usage
type Metadata ¶
type Metadata struct {
Title string // Document title
Author string // Author name
Subject string // Document subject
Keywords []string // Keywords
Creator string // Application that created the document
Producer string // PDF producer (converter)
CreationDate time.Time // Creation date
ModDate time.Time // Last modification date
Trapped string // Trapping information (True/False/Unknown)
Custom map[string]string // Custom metadata fields
}
Metadata represents PDF document metadata
type MultiLangProcessor ¶
type MultiLangProcessor struct {
// contains filtered or unexported fields
}
MultiLangProcessor provides multi-language text processing
func NewMultiLangProcessor ¶
func NewMultiLangProcessor() *MultiLangProcessor
NewMultiLangProcessor creates a new multi-language processor
func (*MultiLangProcessor) DetectLanguage ¶
func (mlp *MultiLangProcessor) DetectLanguage(text string) LanguageInfo
DetectLanguage detects the language of a given text
func (*MultiLangProcessor) GetLanguageConfidenceThreshold ¶
func (mlp *MultiLangProcessor) GetLanguageConfidenceThreshold() float64
GetLanguageConfidenceThreshold returns a confidence threshold for reliable detection
func (*MultiLangProcessor) GetLanguageName ¶
func (mlp *MultiLangProcessor) GetLanguageName(lang Language) string
GetLanguageName returns the full name of a language
func (*MultiLangProcessor) GetSupportedLanguages ¶
func (mlp *MultiLangProcessor) GetSupportedLanguages() []Language
GetSupportedLanguages returns the list of supported languages
func (*MultiLangProcessor) IsEnglish ¶
func (mlp *MultiLangProcessor) IsEnglish(text string) bool
IsEnglish checks if text is likely English
func (*MultiLangProcessor) IsFrench ¶
func (mlp *MultiLangProcessor) IsFrench(text string) bool
IsFrench checks if text is likely French
func (*MultiLangProcessor) IsGerman ¶
func (mlp *MultiLangProcessor) IsGerman(text string) bool
IsGerman checks if text is likely German
func (*MultiLangProcessor) IsSpanish ¶
func (mlp *MultiLangProcessor) IsSpanish(text string) bool
IsSpanish checks if text is likely Spanish
func (*MultiLangProcessor) ProcessTextWithLanguageDetection ¶
func (mlp *MultiLangProcessor) ProcessTextWithLanguageDetection(texts []Text) []TextWithLanguage
ProcessTextWithLanguageDetection processes text with language detection
type MultiLanguageTextClassifier ¶
type MultiLanguageTextClassifier struct {
*TextClassifier
// contains filtered or unexported fields
}
MultiLanguageTextClassifier extends the text classifier with language awareness
func NewMultiLanguageTextClassifier ¶
func NewMultiLanguageTextClassifier(texts []Text, pageWidth, pageHeight float64) *MultiLanguageTextClassifier
NewMultiLanguageTextClassifier creates a new multi-language text classifier
func (*MultiLanguageTextClassifier) ClassifyBlocksWithLanguage ¶
func (mltc *MultiLanguageTextClassifier) ClassifyBlocksWithLanguage() []ClassifiedBlockWithLanguage
ClassifyBlocksWithLanguage extends the classification with language information
type MultiLevelCache ¶
type MultiLevelCache struct {
// contains filtered or unexported fields
}
MultiLevelCache is a multi-level cache manager
func NewMultiLevelCache ¶
func NewMultiLevelCache() *MultiLevelCache
NewMultiLevelCache creates a new multi-level cache
func (*MultiLevelCache) Get ¶
func (mlc *MultiLevelCache) Get(key string) (interface{}, bool)
Get retrieves data from the cache
func (*MultiLevelCache) Prefetch ¶
func (mlc *MultiLevelCache) Prefetch(keys []string)
Prefetch prefetches page data
func (*MultiLevelCache) Put ¶
func (mlc *MultiLevelCache) Put(key string, value interface{})
Put stores a value in the cache
func (*MultiLevelCache) Stats ¶
func (mlc *MultiLevelCache) Stats() map[string]uint64
Stats returns cache statistics
type OptimizedFontCache ¶ added in v1.0.1
type OptimizedFontCache struct {
// contains filtered or unexported fields
}
OptimizedFontCache implements an ultra-high-performance font cache with:
- Lock-free read path using atomic operations
- Sharded design to reduce lock contention (16 shards)
- Zero-allocation fast path for cache hits
- Inline LRU using lock-free linked list approximation
- Pre-allocated pools for metadata structs
- SIMD-friendly memory layout
func NewOptimizedFontCache ¶ added in v1.0.1
func NewOptimizedFontCache(totalCapacity int) *OptimizedFontCache
NewOptimizedFontCache creates a new optimized font cache
func (*OptimizedFontCache) Clear ¶ added in v1.0.1
func (ofc *OptimizedFontCache) Clear()
Clear removes all entries from all shards
func (*OptimizedFontCache) Get ¶ added in v1.0.1
func (ofc *OptimizedFontCache) Get(key string) (*Font, bool)
Get retrieves a font from the cache (lock-free fast path)
func (*OptimizedFontCache) GetOrCompute ¶ added in v1.0.1
func (ofc *OptimizedFontCache) GetOrCompute(key string, compute func() (*Font, error)) (*Font, error)
GetOrCompute retrieves a font from cache or computes it if not present
func (*OptimizedFontCache) GetStats ¶ added in v1.0.1
func (ofc *OptimizedFontCache) GetStats() FontCacheStats
GetStats returns aggregated statistics across all shards
func (*OptimizedFontCache) Prefetch ¶ added in v1.0.1
func (ofc *OptimizedFontCache) Prefetch(keys []string, compute func(key string) (*Font, error))
Prefetch warms up the cache with multiple keys concurrently
func (*OptimizedFontCache) Remove ¶ added in v1.0.1
func (ofc *OptimizedFontCache) Remove(key string)
Remove removes a specific key from the cache
func (*OptimizedFontCache) Set ¶ added in v1.0.1
func (ofc *OptimizedFontCache) Set(key string, font *Font)
Set stores a font in the cache
type OptimizedMemoryPool ¶
type OptimizedMemoryPool struct {
// contains filtered or unexported fields
}
OptimizedMemoryPool provides better memory pool management
func NewOptimizedMemoryPool ¶
func NewOptimizedMemoryPool(size int) *OptimizedMemoryPool
NewOptimizedMemoryPool creates a pool with size tracking
func (*OptimizedMemoryPool) Get ¶
func (omp *OptimizedMemoryPool) Get() []byte
Get retrieves a buffer from the pool
func (*OptimizedMemoryPool) Put ¶
func (omp *OptimizedMemoryPool) Put(bufPtr *[]byte)
Put returns a buffer to the pool, resetting it
type OptimizedSorter ¶
type OptimizedSorter struct {
// contains filtered or unexported fields
}
OptimizedSorter provides optimized sorting algorithms for large text collections
func NewOptimizedSorter ¶
func NewOptimizedSorter() *OptimizedSorter
NewOptimizedSorter creates a new optimized sorter
func (*OptimizedSorter) QuickSortTexts ¶
func (os *OptimizedSorter) QuickSortTexts(texts []Text, less func(i, j int) bool)
QuickSortTexts implements quicksort for text collections
func (*OptimizedSorter) SortTextHorizontalByOptimized ¶
func (os *OptimizedSorter) SortTextHorizontalByOptimized(th TextHorizontal)
SortTextHorizontalByOptimized sorts TextHorizontal using optimized algorithm
func (*OptimizedSorter) SortTextVerticalByOptimized ¶
func (os *OptimizedSorter) SortTextVerticalByOptimized(tv TextVertical)
SortTextVerticalByOptimized sorts TextVertical using optimized algorithm
func (*OptimizedSorter) SortTexts ¶
func (os *OptimizedSorter) SortTexts(texts []Text, less func(i, j int) bool)
SortTexts sorts a collection of texts using the most appropriate algorithm
func (*OptimizedSorter) SortTextsWithAlgorithm ¶
func (os *OptimizedSorter) SortTextsWithAlgorithm(texts []Text, less func(i, j int) bool, algorithm string)
SortTextsWithAlgorithm allows choosing a specific sorting algorithm
type OptimizedTextClusterSorter ¶
type OptimizedTextClusterSorter struct {
// contains filtered or unexported fields
}
OptimizedTextClusterSorter provides optimized sorting for text clusters
func NewOptimizedTextClusterSorter ¶
func NewOptimizedTextClusterSorter() *OptimizedTextClusterSorter
NewOptimizedTextClusterSorter creates a new optimized cluster sorter
func (*OptimizedTextClusterSorter) SortTextBlocks ¶
func (otcs *OptimizedTextClusterSorter) SortTextBlocks(blocks []*TextBlock, sortBy string)
SortTextBlocks sorts text blocks by various criteria
type Outline ¶
An Outline is a tree describing the outline (also known as the table of contents) of a document.
type PDFError ¶
type PDFError struct {
Op string // Operation that failed (e.g., "extract text", "parse font")
Page int // Page number where error occurred (0 if not page-specific)
Path string // File path if applicable
Err error // Underlying error
}
PDFError represents an error that occurred during PDF processing. It includes contextual information about where the error occurred.
type Page ¶
type Page struct {
V Value
// contains filtered or unexported fields
}
A Page represents a single page in a PDF file. The methods interpret a Page dictionary stored in V.
func (Page) ClassifyTextBlocks ¶
func (p Page) ClassifyTextBlocks() ([]ClassifiedBlock, error)
ClassifyTextBlocks is a convenience function that creates a classifier and runs classification
func (*Page) Cleanup ¶ added in v1.0.7
func (p *Page) Cleanup()
Cleanup releases resources held by the Page, specifically the fontCache reference. Call this after processing a page to prevent memory leaks in batch operations. This method is safe to call multiple times.
func (Page) GetPlainText ¶
GetPlainText returns all of the page's text without formatting. fonts can be passed in (to improve parsing performance) or left nil
func (Page) GetPlainTextWithSmartOrdering ¶
GetPlainTextWithSmartOrdering extracts plain text using an improved text ordering algorithm that handles multi-column layouts and complex reading orders.
func (Page) GetTextByColumn ¶
GetTextByColumn returns all of the page's text grouped by column.
func (Page) GetTextByRow ¶
GetTextByRow returns all of the page's text grouped by row.
func (Page) OptimizedGetPlainText ¶
OptimizedGetPlainText returns all of the page's text using optimized string building. This version uses object pools and pre-allocation to reduce memory allocations.
func (Page) OptimizedGetTextByColumn ¶
OptimizedGetTextByColumn returns all of the page's text grouped by column, using optimized allocation.
func (Page) OptimizedGetTextByRow ¶
OptimizedGetTextByRow returns all of the page's text grouped by row, using optimized allocation.
func (*Page) SetFontCache ¶ added in v1.0.1
func (p *Page) SetFontCache(cache *GlobalFontCache)
SetFontCache sets a font cache for this page to improve performance during text extraction by reusing parsed fonts. Deprecated: Use SetFontCacheInterface for better flexibility.
func (*Page) SetFontCacheInterface ¶ added in v1.0.1
func (p *Page) SetFontCacheInterface(cache FontCacheInterface)
SetFontCacheInterface sets a font cache using the interface. This supports both GlobalFontCache and OptimizedFontCache.
type PageStream ¶
PageStream represents a stream of pages
type ParallelExtractor ¶ added in v1.0.2
type ParallelExtractor struct {
// contains filtered or unexported fields
}
ParallelExtractor is an advanced parallel extraction interface combining all optimizations.
Example (Basic) ¶
ExampleParallelExtractor_basic demonstrates basic usage.
// Create parallel extractor
extractor := NewParallelExtractor(4) // use 4 worker goroutines
defer extractor.Close()
// Note: actual usage requires creating Page objects
// pages := []Page{...}
ctx := context.Background()
// Simulate empty page list
var pages []Page
// Extract all pages
results, err := extractor.ExtractAllPages(ctx, pages)
if err != nil {
fmt.Printf("Error: %v\n", err)
return
}
fmt.Printf("Extracted %d pages\n", len(results))
Output: Extracted 0 pages
func NewParallelExtractor ¶ added in v1.0.2
func NewParallelExtractor(workers int) *ParallelExtractor
NewParallelExtractor creates a new parallel extractor with the given number of workers.
func (*ParallelExtractor) Close ¶ added in v1.0.2
func (pe *ParallelExtractor) Close()
Close closes and cleans up resources
func (*ParallelExtractor) ExtractAllPages ¶ added in v1.0.2
func (pe *ParallelExtractor) ExtractAllPages(ctx context.Context, pages []Page) ([][]Text, error)
ExtractAllPages extracts all pages (using all optimizations)
func (*ParallelExtractor) GetCacheStats ¶ added in v1.0.2
func (pe *ParallelExtractor) GetCacheStats() ShardedCacheStats
GetCacheStats gets cache statistics
func (*ParallelExtractor) GetPrefetchStats ¶ added in v1.0.2
func (pe *ParallelExtractor) GetPrefetchStats() PrefetchStats
GetPrefetchStats gets prefetch statistics
type ParallelProcessor ¶
type ParallelProcessor struct {
// contains filtered or unexported fields
}
ParallelProcessor handles multi-level parallel processing for PDF text extraction
func NewParallelProcessor ¶
func NewParallelProcessor(workers int) *ParallelProcessor
NewParallelProcessor creates a new parallel processor with the specified number of workers
func (*ParallelProcessor) ProcessPages ¶
func (pp *ParallelProcessor) ProcessPages(ctx context.Context, pages []Page, processorFunc func(Page) ([]Text, error)) ([][]Text, error)
ProcessPages processes multiple pages in parallel
type ParallelTextExtractor ¶
type ParallelTextExtractor struct {
// contains filtered or unexported fields
}
ParallelTextExtractor provides multi-level parallel extraction
func NewParallelTextExtractor ¶
func NewParallelTextExtractor(workers int) *ParallelTextExtractor
NewParallelTextExtractor creates a new parallel text extractor
func (*ParallelTextExtractor) ExtractWithParallelProcessing ¶
func (pte *ParallelTextExtractor) ExtractWithParallelProcessing(ctx context.Context, reader *Reader) ([]Text, error)
ExtractWithParallelProcessing extracts text using multi-level parallel processing
func (*ParallelTextExtractor) ParallelSort ¶
func (pte *ParallelTextExtractor) ParallelSort(ctx context.Context, texts []Text, less func(i, j int) bool) error
ParallelSort provides parallel sorting for large text collections
type ParseLimits ¶ added in v1.1.5
type ParseLimits struct {
// MaxParseTime is the maximum time allowed for parsing a single page (0 = no limit)
MaxParseTime time.Duration
// MaxHexStringBytes is the maximum size for a single hex string (0 = no limit, default 10MB)
MaxHexStringBytes int
// MaxStreamBytes is the maximum size for a single stream (0 = no limit)
MaxStreamBytes int64
// CheckInterval specifies how often to check for cancellation during intensive loops
// Higher values improve performance but reduce responsiveness to cancellation
// Default: 1000 iterations
CheckInterval int
}
ParseLimits defines resource limits for PDF parsing operations
func DefaultParseLimits ¶ added in v1.1.5
func DefaultParseLimits() ParseLimits
DefaultParseLimits returns sensible default limits
type PerformanceMetrics ¶
type PerformanceMetrics struct {
ExtractDuration atomic.Int64 // nanoseconds
ParseDuration atomic.Int64
SortDuration atomic.Int64
TotalAllocs atomic.Uint64
BytesAllocated atomic.Uint64
GoroutineCount atomic.Int32
CacheHitRate atomic.Uint64 // percentage * 100
}
PerformanceMetrics is a performance metrics collector.
func (*PerformanceMetrics) GetMetrics ¶
func (pm *PerformanceMetrics) GetMetrics() map[string]interface{}
GetMetrics returns a snapshot of the current metrics.
func (*PerformanceMetrics) RecordAllocation ¶
func (pm *PerformanceMetrics) RecordAllocation(bytes uint64)
RecordAllocation records a memory allocation.
func (*PerformanceMetrics) RecordExtractDuration ¶
func (pm *PerformanceMetrics) RecordExtractDuration(d time.Duration)
RecordExtractDuration records an extraction duration.
type PoolStats ¶ added in v1.0.1
PoolStats holds statistics about pool usage (for debugging/monitoring).
type PoolWarmer ¶ added in v1.0.2
type PoolWarmer struct {
// contains filtered or unexported fields
}
PoolWarmer is a memory pool warmer. It pre-allocates and fills memory pools at application startup to reduce runtime allocation overhead.
func (*PoolWarmer) GetWarmupStats ¶ added in v1.0.2
func (pw *PoolWarmer) GetWarmupStats() WarmupStats
GetWarmupStats gets warmup statistics
func (*PoolWarmer) IsWarmed ¶ added in v1.0.2
func (pw *PoolWarmer) IsWarmed() bool
IsWarmed reports whether warmup has completed.
func (*PoolWarmer) Warmup ¶ added in v1.0.2
func (pw *PoolWarmer) Warmup(config *WarmupConfig) error
Warmup performs memory pool warmup
type PrefetchItem ¶ added in v1.0.2
type PrefetchItem struct {
// contains filtered or unexported fields
}
PrefetchItem represents a single item in the prefetch queue.
type PrefetchQueue ¶ added in v1.0.2
type PrefetchQueue struct {
// contains filtered or unexported fields
}
PrefetchQueue is a priority queue of prefetch items.
func (*PrefetchQueue) Len ¶ added in v1.0.2
func (pq *PrefetchQueue) Len() int
func (*PrefetchQueue) Less ¶ added in v1.0.2
func (pq *PrefetchQueue) Less(i, j int) bool
func (*PrefetchQueue) Pop ¶ added in v1.0.2
func (pq *PrefetchQueue) Pop() interface{}
func (*PrefetchQueue) Push ¶ added in v1.0.2
func (pq *PrefetchQueue) Push(x interface{})
func (*PrefetchQueue) Swap ¶ added in v1.0.2
func (pq *PrefetchQueue) Swap(i, j int)
type PrefetchStats ¶ added in v1.0.2
PrefetchStats holds prefetch statistics.
type RTreeNode ¶
type RTreeNode struct {
// contains filtered or unexported fields
}
RTreeNode represents a node in the R-tree
type RTreeSpatialIndex ¶
type RTreeSpatialIndex struct {
// contains filtered or unexported fields
}
RTreeSpatialIndex provides a more sophisticated spatial index using a proper R-tree implementation
func NewRTreeSpatialIndex ¶
func NewRTreeSpatialIndex(texts []Text) *RTreeSpatialIndex
NewRTreeSpatialIndex creates a new R-tree based spatial index
func (*RTreeSpatialIndex) Insert ¶
func (rt *RTreeSpatialIndex) Insert(text Text)
Insert adds a text element to the R-tree
func (*RTreeSpatialIndex) Query ¶
func (rt *RTreeSpatialIndex) Query(bounds Rect) []Text
Query returns all text elements that intersect with the given bounds
type Reader ¶
type Reader struct {
// contains filtered or unexported fields
}
A Reader is a single PDF file open for reading.
func NewReaderEncrypted ¶
NewReaderEncrypted opens a file for reading, using the data in f with the given total size. If the PDF is encrypted, NewReaderEncrypted calls pw repeatedly to obtain passwords to try. If pw returns the empty string, NewReaderEncrypted stops trying to decrypt the file and returns an error.
func NewReaderEncryptedWithMmap ¶
NewReaderEncryptedWithMmap opens a file for reading with memory mapping for large files. If the file size exceeds 10MB, it uses memory mapping to reduce memory usage. This is a wrapper around NewReaderEncrypted that optimizes for large files.
func (*Reader) BatchExtractText ¶
BatchExtractText extracts text from multiple pages using lazy loading and object pooling. It is optimized for processing many pages without keeping them all in memory.
func (*Reader) ClearCache ¶ added in v1.0.6
func (r *Reader) ClearCache()
ClearCache clears the object cache, releasing all cached objects. This is useful for freeing memory after batch processing large PDFs.
func (*Reader) Close ¶ added in v1.0.2
Close closes the Reader and releases associated resources. If the underlying ReaderAt implements io.Closer, it will be closed.
func (*Reader) ExtractAllPagesParallel ¶ added in v1.0.2
ExtractAllPagesParallel extracts the text of all pages using the enhanced parallel extractor. This method integrates all performance optimizations: sharded cache, font prefetching, zero-copy, and more.
Example ¶
ExampleReader_ExtractAllPagesParallel uses Reader's parallel extraction method
// Note: this example requires actual PDF files
// Here only shows API usage
/*
// Open PDF file
f, r, err := Open("document.pdf")
if err != nil {
panic(err)
}
defer f.Close()
// Create context
ctx, cancel := context.WithTimeout(context.Background(), 1*time.Minute)
defer cancel()
// Parallel extract all page texts
pages, err := r.ExtractAllPagesParallel(ctx, 0) // 0 = auto-detect CPU core count
if err != nil {
panic(err)
}
// Output text for each page
for i, pageText := range pages {
fmt.Printf("Page %d has %d characters\n", i+1, len(pageText))
}
*/
func (*Reader) ExtractPagesBatch ¶ added in v1.0.1
func (r *Reader) ExtractPagesBatch(opts BatchExtractOptions) <-chan BatchResult
ExtractPagesBatch extracts text from multiple pages in batches. It is optimized for high-throughput scenarios with many pages.
Example ¶
// This example shows how to use batch extraction
// (requires a real PDF file to run)
// r, err := Open("document.pdf")
// if err != nil {
// log.Fatal(err)
// }
// defer r.Close()
//
// opts := BatchExtractOptions{
// Workers: 4,
// Pages: []int{1, 2, 3, 4, 5}, // Extract first 5 pages
// }
//
// for result := range r.ExtractPagesBatch(opts) {
// if result.Error != nil {
// log.Printf("Error on page %d: %v", result.PageNum, result.Error)
// continue
// }
// fmt.Printf("Page %d: %d characters\n", result.PageNum, len(result.Text))
// }
func (*Reader) ExtractPagesBatchToString ¶ added in v1.0.1
func (r *Reader) ExtractPagesBatchToString(opts BatchExtractOptions) (string, error)
ExtractPagesBatchToString is a convenience function that collects all results into a single string
Example ¶
// This example shows how to extract all pages to a single string
// r, err := Open("document.pdf")
// if err != nil {
// log.Fatal(err)
// }
// defer r.Close()
//
// opts := BatchExtractOptions{
// Workers: 8,
// SmartOrdering: true,
// }
//
// text, err := r.ExtractPagesBatchToString(opts)
// if err != nil {
// log.Fatal(err)
// }
//
// fmt.Printf("Extracted %d characters from %d pages\n", len(text), r.NumPage())
func (*Reader) ExtractStructuredBatch ¶ added in v1.0.1
func (r *Reader) ExtractStructuredBatch(opts BatchExtractOptions) <-chan StructuredBatchResult
ExtractStructuredBatch extracts structured text in batches
func (*Reader) ExtractWithContext ¶
ExtractWithContext extracts plain text from all pages with cancellation support
func (*Reader) GetCacheCapacity ¶ added in v1.0.6
GetCacheCapacity returns the current object cache capacity. Returns 0 if no capacity limit is set (unbounded cache).
func (*Reader) GetMetadata ¶
GetMetadata extracts metadata from the PDF document
func (*Reader) GetPlainText ¶
GetPlainText returns all the text in the PDF file
func (*Reader) GetPlainTextConcurrent ¶
GetPlainTextConcurrent extracts all pages concurrently using the specified number of workers.
func (*Reader) GetStyledTexts ¶
GetStyledTexts returns all text fragments in the document as an array, including their style information.
func (*Reader) Outline ¶
Outline returns the document outline. The Outline returned is the root of the outline tree and typically has no Title itself. That is, the children of the returned root are the top-level entries in the outline.
func (*Reader) Page ¶
Page returns the page for the given page number. Page numbers are indexed starting at 1, not 0. If the page is not found, Page returns a Page with p.V.IsNull().
func (*Reader) SetCacheCapacity ¶
func (*Reader) SetMetadata ¶
SetMetadata sets metadata fields in the PDF (for future write support). It is currently not implemented, as the library is read-only.
type ResourceManager ¶
type ResourceManager struct {
// contains filtered or unexported fields
}
ResourceManager provides automatic resource cleanup
func NewResourceManager ¶
func NewResourceManager() *ResourceManager
NewResourceManager creates a new resource manager
func (*ResourceManager) Add ¶
func (rm *ResourceManager) Add(resource io.Closer)
Add adds a resource to be managed
func (*ResourceManager) Close ¶
func (rm *ResourceManager) Close() error
Close closes all managed resources
type ResultCache ¶
type ResultCache struct {
// contains filtered or unexported fields
}
ResultCache provides caching for parsed and classified results
func GetGlobalCache ¶
func GetGlobalCache() *ResultCache
GetGlobalCache returns a singleton cache instance
func NewResultCache ¶
func NewResultCache(maxSize int64, ttl time.Duration, policy string) *ResultCache
NewResultCache creates a new result cache with specified parameters
func (*ResultCache) Close ¶ added in v1.0.5
func (rc *ResultCache) Close()
Close stops the cleanup goroutine and releases resources
func (*ResultCache) Get ¶
func (rc *ResultCache) Get(key string) (interface{}, bool)
Get retrieves an item from the cache
func (*ResultCache) GetHitRatio ¶
func (rc *ResultCache) GetHitRatio() float64
GetHitRatio returns the cache hit ratio
func (*ResultCache) GetStats ¶
func (rc *ResultCache) GetStats() CacheStats
GetStats returns cache statistics
func (*ResultCache) Has ¶
func (rc *ResultCache) Has(key string) bool
Has checks if a key exists in the cache (without updating access stats)
func (*ResultCache) Put ¶
func (rc *ResultCache) Put(key string, value interface{})
Put adds an item to the cache
func (*ResultCache) Remove ¶
func (rc *ResultCache) Remove(key string) bool
Remove removes an item from the cache
type Row ¶
type Row struct {
Position int64
Content TextHorizontal
}
Row represents the contents of a row
type ShardedCache ¶ added in v1.0.2
type ShardedCache struct {
// contains filtered or unexported fields
}
ShardedCache implements a high-performance sharded cache with the following features:
- 256 shards to minimize lock contention
- Independent locks and LRU linked lists for each shard
- Statistics implemented with atomic operations
- Adaptive eviction strategy
func NewShardedCache ¶ added in v1.0.2
func NewShardedCache(maxSize int, ttl time.Duration) *ShardedCache
NewShardedCache creates a new sharded cache
func (*ShardedCache) Close ¶ added in v1.0.5
func (sc *ShardedCache) Close()
Close stops the cleanup goroutine and releases resources.
func (*ShardedCache) Delete ¶ added in v1.0.2
func (sc *ShardedCache) Delete(key string)
Delete deletes a cache entry.
func (*ShardedCache) Get ¶ added in v1.0.2
func (sc *ShardedCache) Get(key string) (interface{}, bool)
Get gets a value from the cache.
func (*ShardedCache) GetStats ¶ added in v1.0.2
func (sc *ShardedCache) GetStats() ShardedCacheStats
GetStats gets cache statistics
func (*ShardedCache) Set ¶ added in v1.0.2
func (sc *ShardedCache) Set(key string, value interface{}, size int64)
Set sets a cache value.
type ShardedCacheEntry ¶ added in v1.0.2
type ShardedCacheEntry struct {
// contains filtered or unexported fields
}
ShardedCacheEntry represents a cache entry
type ShardedCacheStats ¶ added in v1.0.2
type ShardedCacheStats struct {
Hits uint64
Misses uint64
Evictions uint64
Entries int64
Size int64
}
ShardedCacheStats holds cache statistics.
type SizedBytePool ¶ added in v1.0.1
type SizedBytePool struct {
// contains filtered or unexported fields
}
SizedBytePool implements a multi-level size-bucketed object pool for byte slices. It reduces memory allocation overhead by reusing buffers of appropriate sizes.
Size buckets: 16B, 32B, 64B, 128B, 256B, 512B, 1KB, 4KB
func NewSizedBytePool ¶ added in v1.0.1
func NewSizedBytePool() *SizedBytePool
NewSizedBytePool creates a new sized byte pool with 8 size buckets
func (*SizedBytePool) Get ¶ added in v1.0.1
func (sp *SizedBytePool) Get(size int) []byte
Get retrieves a byte slice from the appropriate size bucket. It returns a buffer with at least the requested capacity.
func (*SizedBytePool) Put ¶ added in v1.0.1
func (sp *SizedBytePool) Put(buf []byte)
Put returns a byte slice to the appropriate pool. The slice is cleared before being returned to the pool.
type SizedPool ¶
type SizedPool struct {
// contains filtered or unexported fields
}
SizedPool is a fine-grained object pool with multi-level size bucketing.
func NewSizedPool ¶
func NewSizedPool() *SizedPool
type SizedTextSlicePool ¶ added in v1.0.1
type SizedTextSlicePool struct {
// contains filtered or unexported fields
}
SizedTextSlicePool implements a size-bucketed pool for Text slices, similar to SizedBytePool but for []Text instead of []byte.
func NewSizedTextSlicePool ¶ added in v1.0.1
func NewSizedTextSlicePool() *SizedTextSlicePool
NewSizedTextSlicePool creates a new sized text slice pool. Buckets: 8, 16, 32, 64, 128, 256 texts.
func (*SizedTextSlicePool) Get ¶ added in v1.0.1
func (sp *SizedTextSlicePool) Get(size int) []Text
Get retrieves a Text slice from the appropriate size bucket
func (*SizedTextSlicePool) Put ¶ added in v1.0.1
func (sp *SizedTextSlicePool) Put(slice []Text)
Put returns a Text slice to the appropriate pool
type SortStrategy ¶ added in v1.0.1
type SortStrategy int
SortStrategy represents different sorting algorithms available
const (
	StrategyAuto SortStrategy = iota // Automatically select best algorithm
	StrategyRadix // Radix sort for numeric keys
	StrategyQuick // Quicksort for general comparison
	StrategyInsertion // Insertion sort for small arrays
	StrategyStandard // Go standard library sort
)
type SortingMetrics ¶ added in v1.0.1
type SortingMetrics struct {
RadixSortCount int
QuickSortCount int
InsertionSortCount int
StandardSortCount int
}
SortingMetrics tracks performance of different sorting strategies
func GetSortingMetrics ¶ added in v1.0.1
func GetSortingMetrics() SortingMetrics
GetSortingMetrics returns current sorting metrics
type SpatialIndex ¶
type SpatialIndex struct {
// contains filtered or unexported fields
}
SpatialIndex provides spatial indexing for efficient text location queries. This is a simple implementation using a grid-based approach; for production use, consider a more sophisticated structure such as an R-tree.
func NewSpatialIndex ¶
func NewSpatialIndex(texts []Text) *SpatialIndex
NewSpatialIndex creates a new spatial index from text elements
func (*SpatialIndex) Query ¶
func (si *SpatialIndex) Query(bounds Rect) []Text
Query returns all text elements that potentially intersect with the given bounds
type SpatialIndexInterface ¶
SpatialIndexInterface allows using either the grid or the R-tree implementation.
func NewSpatialIndexInterface ¶
func NewSpatialIndexInterface(texts []Text) SpatialIndexInterface
NewSpatialIndexInterface creates a spatial index interface (can be switched between implementations)
type Stack ¶
type Stack struct {
// contains filtered or unexported fields
}
A Stack represents a stack of values.
type StartupConfig ¶ added in v1.0.2
type StartupConfig struct {
WarmupPools bool
WarmupConfig *WarmupConfig
PreallocateCaches bool
FontCacheSize int
ResultCacheSize int
TuneGC bool
GCPercent int
MemoryBallast int64
SetMaxProcs bool
MaxProcs int
}
StartupConfig holds startup configuration.
func DefaultStartupConfig ¶ added in v1.0.2
func DefaultStartupConfig() *StartupConfig
DefaultStartupConfig returns the default startup configuration.
type StreamProcessor ¶
type StreamProcessor struct {
// contains filtered or unexported fields
}
StreamProcessor handles streaming processing of PDF content to minimize memory usage
func NewStreamProcessor ¶
func NewStreamProcessor(chunkSize, bufferSize int, maxMemory int64) *StreamProcessor
NewStreamProcessor creates a new streaming processor
func (*StreamProcessor) Close ¶
func (sp *StreamProcessor) Close()
Close releases resources used by the stream processor
func (*StreamProcessor) ProcessPageStream ¶
func (sp *StreamProcessor) ProcessPageStream(reader *Reader, handler func(PageStream) error) error
ProcessPageStream processes pages in a streaming fashion
func (*StreamProcessor) ProcessTextBlockStream ¶
func (sp *StreamProcessor) ProcessTextBlockStream(reader *Reader, handler func(TextBlockStream) error) error
ProcessTextBlockStream processes text blocks in a streaming fashion
func (*StreamProcessor) ProcessTextStream ¶
func (sp *StreamProcessor) ProcessTextStream(reader *Reader, handler func(TextStream) error) error
ProcessTextStream processes text in a streaming fashion
type StreamingBatchExtractor ¶ added in v1.0.1
type StreamingBatchExtractor struct {
// contains filtered or unexported fields
}
StreamingBatchExtractor provides a streaming interface for batch extraction. This is useful for very large PDFs where you want to process results as they arrive.
Example ¶
// This example shows streaming batch extraction with a callback
// r, err := Open("document.pdf")
// if err != nil {
// log.Fatal(err)
// }
// defer r.Close()
//
// ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
// defer cancel()
//
// opts := BatchExtractOptions{
// Context: ctx,
// Workers: 4,
// }
//
// extractor := NewStreamingBatchExtractor(r, opts)
// extractor.Start()
//
// err = extractor.ProcessAll(func(result BatchResult) error {
// if result.Error != nil {
// return result.Error
// }
// // Process each page as it arrives
// fmt.Printf("Processing page %d...\n", result.PageNum)
// return nil
// })
//
// if err != nil {
// log.Fatal(err)
// }
func NewStreamingBatchExtractor ¶ added in v1.0.1
func NewStreamingBatchExtractor(r *Reader, opts BatchExtractOptions) *StreamingBatchExtractor
NewStreamingBatchExtractor creates a new streaming batch extractor
func (*StreamingBatchExtractor) Next ¶ added in v1.0.1
func (sbe *StreamingBatchExtractor) Next() *BatchResult
Next returns the next result, or nil if done
func (*StreamingBatchExtractor) ProcessAll ¶ added in v1.0.1
func (sbe *StreamingBatchExtractor) ProcessAll(callback func(BatchResult) error) error
ProcessAll processes all pages with a callback function
func (*StreamingBatchExtractor) Start ¶ added in v1.0.1
func (sbe *StreamingBatchExtractor) Start()
Start begins the extraction process
type StreamingMetadataExtractor ¶
type StreamingMetadataExtractor struct {
// contains filtered or unexported fields
}
StreamingMetadataExtractor extracts metadata in a streaming fashion
func NewStreamingMetadataExtractor ¶
func NewStreamingMetadataExtractor(chunkSize, bufferSize int, maxMemory int64) *StreamingMetadataExtractor
NewStreamingMetadataExtractor creates a new streaming metadata extractor
func (*StreamingMetadataExtractor) ExtractMetadataStream ¶
func (sme *StreamingMetadataExtractor) ExtractMetadataStream(reader *Reader) (<-chan Metadata, <-chan error)
ExtractMetadataStream extracts metadata in a streaming way
type StreamingTextClassifier ¶
type StreamingTextClassifier struct {
// contains filtered or unexported fields
}
StreamingTextClassifier classifies text in a streaming fashion to minimize memory usage
func NewStreamingTextClassifier ¶
func NewStreamingTextClassifier(chunkSize, bufferSize int, maxMemory int64) *StreamingTextClassifier
NewStreamingTextClassifier creates a new streaming text classifier
func (*StreamingTextClassifier) ClassifyTextStream ¶
func (stc *StreamingTextClassifier) ClassifyTextStream(reader *Reader) (<-chan ClassifiedBlock, <-chan error)
ClassifyTextStream classifies text in a streaming way
type StreamingTextExtractor ¶
type StreamingTextExtractor struct {
// contains filtered or unexported fields
}
StreamingTextExtractor provides memory-efficient text extraction for large PDFs
func NewStreamingTextExtractor ¶
func NewStreamingTextExtractor(r *Reader, maxCachedPages int) *StreamingTextExtractor
NewStreamingTextExtractor creates a streaming extractor for large PDFs
func (*StreamingTextExtractor) Close ¶
func (e *StreamingTextExtractor) Close()
Close releases resources used by the extractor
func (*StreamingTextExtractor) GetProgress ¶
func (e *StreamingTextExtractor) GetProgress() float64
GetProgress returns the extraction progress (0.0 to 1.0)
func (*StreamingTextExtractor) NextBatch ¶
func (e *StreamingTextExtractor) NextBatch() (results map[int]string, hasMore bool, err error)
NextBatch extracts text from the next batch of pages
func (*StreamingTextExtractor) NextPage ¶
func (e *StreamingTextExtractor) NextPage() (pageNum int, text string, hasMore bool, err error)
NextPage extracts text from the next page
func (*StreamingTextExtractor) Reset ¶
func (e *StreamingTextExtractor) Reset()
Reset resets the extractor to the beginning
type StringBuffer ¶ added in v1.0.2
type StringBuffer struct {
// contains filtered or unexported fields
}
StringBuffer is a string-building buffer that optimizes repeated concatenations.
Example ¶
ExampleStringBuffer demonstrates usage of StringBuffer.
builder := NewStringBuffer(100)
builder.WriteString("Hello")
builder.WriteByte(' ')
builder.WriteString("World")
result := builder.StringCopy()
fmt.Println(result)
Output: Hello World
func NewStringBuffer ¶ added in v1.0.2
func NewStringBuffer(capacity int) *StringBuffer
NewStringBuffer creates a new string buffer with the given initial capacity.
func (*StringBuffer) Bytes ¶ added in v1.0.2
func (sb *StringBuffer) Bytes() []byte
Bytes returns the underlying byte slice.
func (*StringBuffer) Len ¶ added in v1.0.2
func (sb *StringBuffer) Len() int
Len returns the current length.
func (*StringBuffer) String ¶ added in v1.0.2
func (sb *StringBuffer) String() string
String returns the contents as a string without copying. Warning: do not use the StringBuffer after calling String.
func (*StringBuffer) StringCopy ¶ added in v1.0.2
func (sb *StringBuffer) StringCopy() string
StringCopy safely returns a copy of the contents as a string.
func (*StringBuffer) WriteByte ¶ added in v1.0.2
func (sb *StringBuffer) WriteByte(b byte) error
WriteByte writes a single byte.
func (*StringBuffer) WriteBytes ¶ added in v1.0.2
func (sb *StringBuffer) WriteBytes(b []byte)
WriteBytes writes a byte slice.
func (*StringBuffer) WriteString ¶ added in v1.0.2
func (sb *StringBuffer) WriteString(s string)
WriteString writes a string.
type StringBuilderPool ¶ added in v1.0.1
type StringBuilderPool struct {
// contains filtered or unexported fields
}
StringBuilderPool provides size-aware string builder pooling
type StringPool ¶ added in v1.0.2
type StringPool struct {
// contains filtered or unexported fields
}
StringPool is a string pool that reuses common strings.
Example ¶
ExampleStringPool demonstrates usage of the string pool.
pool := NewStringPool()
// Put commonly used strings into the pool
fontName1 := pool.Intern("Arial")
fontName2 := pool.Intern("Arial") // Repeated strings will be reused
fmt.Println(fontName1 == fontName2) // Equal content; the interned copies share backing memory
fmt.Println(pool.Size())
Output: true 1
func NewStringPool ¶ added in v1.0.2
func NewStringPool() *StringPool
NewStringPool creates a new string pool.
func (*StringPool) Intern ¶ added in v1.0.2
func (sp *StringPool) Intern(s string) string
Intern adds a string to the pool and returns the pooled version. Strings with the same content will share memory.
func (*StringPool) Size ¶ added in v1.0.2
func (sp *StringPool) Size() int
Size returns the number of strings in the pool.
type StructuredBatchResult ¶ added in v1.0.1
type StructuredBatchResult struct {
PageNum int
Blocks []ClassifiedBlock
Error error
}
StructuredBatchResult holds the result of extracting structured text from a single page during batch extraction.
type Text ¶
type Text struct {
Font string // the font used
FontSize float64 // the font size, in points (1/72 of an inch)
X float64 // the X coordinate, in points, increasing left to right
Y float64 // the Y coordinate, in points, increasing bottom to top
W float64 // the width of the text, in points
S string // the actual UTF-8 text
Vertical bool // whether the text is drawn vertically
Bold bool // whether the text is bold
Italic bool // whether the text is italic
Underline bool // whether the text is underlined
}
A Text represents a single piece of text drawn on a page.
func GetSizedTextSlice ¶ added in v1.0.1
GetSizedTextSlice retrieves a Text slice from the global pool
func GetText ¶
func GetText() *Text
GetText retrieves a Text object from the pool.
func GetTextBySize ¶
GetTextBySize retrieves a Text object from the appropriate pool based on content size
type TextBlock ¶
type TextBlock struct {
Texts []Text
MinX float64
MaxX float64
MinY float64
MaxY float64
AvgFontSize float64
}
TextBlock represents a coherent block of text (like a paragraph or column)
func ClusterTextBlocksOptimized ¶
ClusterTextBlocksOptimized clusters text blocks using a KD-tree. This optimized version reduces temporary object allocations and uses object pools.
type TextBlockStream ¶
TextBlockStream represents a stream of text blocks
type TextClassifier ¶
type TextClassifier struct {
// contains filtered or unexported fields
}
TextClassifier classifies text runs into semantic blocks
func NewTextClassifier ¶
func NewTextClassifier(texts []Text, pageWidth, pageHeight float64) *TextClassifier
NewTextClassifier creates a new text classifier
func (*TextClassifier) ClassifyBlocks ¶
func (tc *TextClassifier) ClassifyBlocks() []ClassifiedBlock
ClassifyBlocks classifies text runs into semantic blocks
type TextEncoding ¶
type TextEncoding interface {
// Decode returns the UTF-8 text corresponding to
// the sequence of code points in raw.
Decode(raw string) (text string)
}
A TextEncoding represents a mapping between font code points and UTF-8 text.
type TextHorizontal ¶
type TextHorizontal []Text
TextHorizontal implements sort.Interface for sorting a slice of Text values in horizontal order, left to right, and then top to bottom within a column.
func (TextHorizontal) Len ¶
func (x TextHorizontal) Len() int
func (TextHorizontal) Less ¶
func (x TextHorizontal) Less(i, j int) bool
func (TextHorizontal) Swap ¶
func (x TextHorizontal) Swap(i, j int)
type TextStream ¶
type TextStream struct {
Text string
PageNum int
Font string
FontSize float64
X, Y float64
W float64
Vertical bool
Confidence float64 // Confidence in the text recognition (0-1)
}
TextStream represents a stream of text with metadata
type TextVertical ¶
type TextVertical []Text
TextVertical implements sort.Interface for sorting a slice of Text values in vertical order, top to bottom, and then left to right within a line.
func (TextVertical) Len ¶
func (x TextVertical) Len() int
func (TextVertical) Less ¶
func (x TextVertical) Less(i, j int) bool
func (TextVertical) Swap ¶
func (x TextVertical) Swap(i, j int)
type TextWithLanguage ¶
type TextWithLanguage struct {
Text Text
Language LanguageInfo
Confidence float64
}
TextWithLanguage represents text with detected language information
type Value ¶
type Value struct {
// contains filtered or unexported fields
}
A Value is a single PDF value, such as an integer, dictionary, or array. The zero Value is a PDF null (Kind() == Null, IsNull() == true).
func (Value) Float64 ¶
func (v Value) Float64() float64
Float64 returns v's float64 value, converting from integer if necessary. If v.Kind() != Real and v.Kind() != Integer, Float64 returns 0.
func (Value) Index ¶
func (v Value) Index(i int) Value
Index returns the i'th element in the array v. If v.Kind() != Array or if i is outside the array bounds, Index returns a null Value.
func (Value) IsNull ¶
func (v Value) IsNull() bool
IsNull reports whether the value is a null. It is equivalent to Kind() == Null.
func (Value) Key ¶
func (v Value) Key(key string) Value
Key returns the value associated with the given name key in the dictionary v. Like the result of the Name method, the key should not include a leading slash. If v is a stream, Key applies to the stream's header dictionary. If v.Kind() != Dict and v.Kind() != Stream, Key returns a null Value.
func (Value) Keys ¶
func (v Value) Keys() []string
Keys returns a sorted list of the keys in the dictionary v. If v is a stream, Keys applies to the stream's header dictionary. If v.Kind() != Dict and v.Kind() != Stream, Keys returns nil.
func (Value) Name ¶
func (v Value) Name() string
Name returns v's name value. If v.Kind() != Name, Name returns the empty string. The returned name does not include the leading slash: if v corresponds to the name written using the syntax /Helvetica, Name() == "Helvetica".
func (Value) RawString ¶
func (v Value) RawString() string
RawString returns v's string value. If v.Kind() != String, RawString returns the empty string.
func (Value) Reader ¶
func (v Value) Reader() io.ReadCloser
Reader returns the data contained in the stream v. If v.Kind() != Stream, Reader returns a ReadCloser that responds to all reads with a “stream not present” error.
func (Value) String ¶
func (v Value) String() string
String returns a textual representation of the value v. Note that String is not the accessor for values with Kind() == String. To access such values, see RawString, Text, and TextFromUTF16.
func (Value) Text ¶
func (v Value) Text() string
Text returns v's string value interpreted as a “text string” (defined in the PDF spec) and converted to UTF-8. If v.Kind() != String, Text returns the empty string.
func (Value) TextFromUTF16 ¶
func (v Value) TextFromUTF16() string
TextFromUTF16 returns v's string value interpreted as big-endian UTF-16 and then converted to UTF-8. If v.Kind() != String or if the data is not valid UTF-16, TextFromUTF16 returns the empty string.
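The zero-result convention is what makes unchecked traversal safe: a lookup on a missing key or a mismatched kind yields a null Value, and further accessors on that null keep returning zero results instead of panicking. A miniature stand-in (the value type below is illustrative, not the package's Value) shows the pattern:

```go
package main

import "fmt"

// value is a miniature stand-in for pdf.Value illustrating the
// design: every accessor on a mismatched kind returns a zero
// result, so lookups chain without error checks.
type value struct {
	dict map[string]value
	str  string
}

// Key looks up name in the dictionary; a missing key (or a nil
// map on the zero value) yields the zero, i.e. null, value.
func (v value) Key(name string) value {
	return v.dict[name]
}

// RawString returns the string value, or "" for non-strings.
func (v value) RawString() string { return v.str }

func main() {
	doc := value{dict: map[string]value{
		"Info": {dict: map[string]value{
			"Title": {str: "report.pdf"},
		}},
	}}
	// Chained lookups need no error checks along the way.
	fmt.Println(doc.Key("Info").Key("Title").RawString()) // report.pdf
	// A missing branch quietly collapses to the empty string.
	fmt.Println(doc.Key("Missing").Key("Title").RawString() == "") // true
}
```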
type WSDeque ¶
type WSDeque struct {
// contains filtered or unexported fields
}
WSDeque is a lock-free work-stealing deque based on the Chase-Lev algorithm.
func NewWSDeque ¶
func (*WSDeque) PushBottom ¶
PushBottom pushes a task onto the bottom of the deque. It must be called only by the owning worker.
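The deque's access discipline can be sketched with a mutex in place of the Chase-Lev atomics: the owner works LIFO at the bottom (cache-friendly, newest task first), while thieves take the oldest task from the top. This is a simplified illustration of the protocol, not the lock-free implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// deque is a mutex-based sketch of the work-stealing discipline;
// the real Chase-Lev deque achieves the same ends lock-free.
type deque struct {
	mu    sync.Mutex
	items []int
}

// PushBottom adds a task at the bottom (owner side).
func (d *deque) PushBottom(v int) {
	d.mu.Lock()
	d.items = append(d.items, v)
	d.mu.Unlock()
}

// PopBottom removes the newest task (owner side, LIFO).
func (d *deque) PopBottom() (int, bool) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if len(d.items) == 0 {
		return 0, false
	}
	v := d.items[len(d.items)-1]
	d.items = d.items[:len(d.items)-1]
	return v, true
}

// StealTop removes the oldest task (thief side, FIFO).
func (d *deque) StealTop() (int, bool) {
	d.mu.Lock()
	defer d.mu.Unlock()
	if len(d.items) == 0 {
		return 0, false
	}
	v := d.items[0]
	d.items = d.items[1:]
	return v, true
}

func main() {
	var d deque
	for i := 1; i <= 3; i++ {
		d.PushBottom(i)
	}
	own, _ := d.PopBottom()   // owner gets the newest task
	stolen, _ := d.StealTop() // thief gets the oldest task
	fmt.Println(own, stolen)  // 3 1
}
```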
type WarmupConfig ¶ added in v1.0.2
type WarmupConfig struct {
// BytePoolWarmup maps each byte-buffer size bucket to the number of buffers to pre-warm
BytePoolWarmup map[int]int
// TextPoolWarmup maps each text-slice size bucket to the number of slices to pre-warm
TextPoolWarmup map[int]int
// Concurrent controls whether warmup runs concurrently
Concurrent bool
// MaxGoroutines limits the number of concurrent warmup goroutines
MaxGoroutines int
}
WarmupConfig configures object-pool pre-warming.
func AggressiveWarmupConfig ¶ added in v1.0.2
func AggressiveWarmupConfig() *WarmupConfig
AggressiveWarmupConfig returns an aggressive warmup configuration (more pre-allocation).
func DefaultWarmupConfig ¶ added in v1.0.2
func DefaultWarmupConfig() *WarmupConfig
DefaultWarmupConfig returns the default warmup configuration.
func LightWarmupConfig ¶ added in v1.0.2
func LightWarmupConfig() *WarmupConfig
LightWarmupConfig returns a light warmup configuration (less pre-allocation).
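Conceptually, warmup pre-populates per-size pools so the first real requests hit warm buffers instead of allocating. The sketch below mirrors only the documented BytePoolWarmup field using sync.Pool; the warm helper is hypothetical, not the package's API:

```go
package main

import (
	"fmt"
	"sync"
)

// warmupConfig mirrors the documented BytePoolWarmup field: each
// size bucket maps to the number of buffers to pre-allocate.
type warmupConfig struct {
	BytePoolWarmup map[int]int
}

// warm builds one sync.Pool per size bucket and pre-fills it, so
// later Gets are likely to reuse a warm buffer instead of allocating.
func warm(cfg warmupConfig) map[int]*sync.Pool {
	pools := make(map[int]*sync.Pool)
	for size, n := range cfg.BytePoolWarmup {
		size := size // capture per-iteration value for the closure
		p := &sync.Pool{New: func() any { return make([]byte, 0, size) }}
		for i := 0; i < n; i++ {
			p.Put(make([]byte, 0, size))
		}
		pools[size] = p
	}
	return pools
}

func main() {
	pools := warm(warmupConfig{BytePoolWarmup: map[int]int{1024: 4, 4096: 2}})
	buf := pools[1024].Get().([]byte)
	fmt.Println(cap(buf)) // 1024
	pools[1024].Put(buf[:0]) // return the buffer for reuse
}
```

Note that sync.Pool may drop pre-warmed items at the next GC, so warmup is a best-effort optimization, not a guarantee.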
type WarmupStats ¶ added in v1.0.2
type WarmupStats struct {
BytePoolSizes map[int]int
TextPoolSizes map[int]int
TotalAllocated int64
IsWarmed bool
}
WarmupStats reports pool warmup statistics.
type WorkStealingExecutor ¶
type WorkStealingExecutor struct {
// contains filtered or unexported fields
}
WorkStealingExecutor is a work-stealing thread pool.
func NewWorkStealingExecutor ¶
func NewWorkStealingExecutor(numWorkers int) *WorkStealingExecutor
func (*WorkStealingExecutor) Start ¶
func (p *WorkStealingExecutor) Start()
func (*WorkStealingExecutor) Stop ¶
func (p *WorkStealingExecutor) Stop()
func (*WorkStealingExecutor) Submit ¶
func (p *WorkStealingExecutor) Submit(task WSTask)
type WorkStealingScheduler ¶
type WorkStealingScheduler struct {
// contains filtered or unexported fields
}
WorkStealingScheduler is a work-stealing scheduler. It reduces goroutine creation overhead and improves parallel processing efficiency.
func NewWorkStealingScheduler ¶
func NewWorkStealingScheduler(numWorkers int) *WorkStealingScheduler
NewWorkStealingScheduler creates a work-stealing scheduler with numWorkers workers.
func (*WorkStealingScheduler) Start ¶
func (wss *WorkStealingScheduler) Start()
Start starts the scheduler's worker goroutines.
func (*WorkStealingScheduler) Submit ¶
func (wss *WorkStealingScheduler) Submit(task Task)
Submit submits a task to the scheduler.
func (*WorkStealingScheduler) Wait ¶
func (wss *WorkStealingScheduler) Wait()
Wait blocks until all submitted tasks have completed.
type WorkerPool ¶ added in v1.0.2
type WorkerPool struct {
// contains filtered or unexported fields
}
WorkerPool is a pool of reusable worker goroutines.
func (*WorkerPool) GetStats ¶ added in v1.0.2
func (wp *WorkerPool) GetStats() WorkerPoolStats
GetStats returns statistics for the worker pool.
type WorkerPoolStats ¶ added in v1.0.2
WorkerPoolStats holds worker pool statistics.
type ZeroCopyBuilder ¶
type ZeroCopyBuilder struct {
// contains filtered or unexported fields
}
ZeroCopyBuilder is a zero-copy string builder.
func NewZeroCopyBuilder ¶
func NewZeroCopyBuilder(cap int) *ZeroCopyBuilder
func (*ZeroCopyBuilder) Reset ¶
func (b *ZeroCopyBuilder) Reset()
func (*ZeroCopyBuilder) UnsafeString ¶
func (b *ZeroCopyBuilder) UnsafeString() string
UnsafeString returns the built string without copying. The underlying buffer must not be modified after the call.
func (*ZeroCopyBuilder) WriteByte ¶
func (b *ZeroCopyBuilder) WriteByte(c byte) error
func (*ZeroCopyBuilder) WriteString ¶
func (b *ZeroCopyBuilder) WriteString(s string)
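The zero-copy trick behind UnsafeString can be sketched with unsafe.String (Go 1.20+), which reinterprets the builder's buffer as a string header without copying the bytes. The builder type below is illustrative, not the package's implementation, and it inherits the same caveat: mutating the buffer after UnsafeString corrupts the returned string.

```go
package main

import (
	"fmt"
	"unsafe"
)

// builder is a sketch of a zero-copy string builder: UnsafeString
// reinterprets the buffer as a string without copying, so the
// buffer must not be modified while the string is in use.
type builder struct{ buf []byte }

func (b *builder) WriteString(s string) { b.buf = append(b.buf, s...) }
func (b *builder) Reset()               { b.buf = b.buf[:0] }

// UnsafeString aliases the buffer's bytes as a string (no copy).
func (b *builder) UnsafeString() string {
	if len(b.buf) == 0 {
		return ""
	}
	return unsafe.String(&b.buf[0], len(b.buf))
}

func main() {
	var b builder
	b.WriteString("zero-")
	b.WriteString("copy")
	fmt.Println(b.UnsafeString()) // zero-copy
}
```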
Notes ¶
Bugs ¶
The package is incomplete, although it has been used successfully on some large real-world PDF files.
The library makes no attempt at efficiency beyond the value cache and font cache. Further optimizations could improve performance for large files.
The support for reading encrypted files is limited to basic RC4 and AES encryption.
Source Files
¶
- adaptive_sort.go
- ascii85.go
- async_io.go
- batch_extract.go
- caching.go
- context_support.go
- enhanced_parallel.go
- errors.go
- extract.go
- extractor.go
- font_cache_global.go
- font_cache_optimized.go
- font_prefetch.go
- lex.go
- metadata.go
- multilang.go
- name.go
- optimization_examples.go
- optimizations_advanced.go
- optimized_extraction.go
- optimized_sorting.go
- page.go
- parallel_processing.go
- performance.go
- pool_sized.go
- pool_warmup.go
- ps.go
- read.go
- sharded_cache.go
- simd_optimized.go
- spatial_index.go
- streaming.go
- text.go
- text_classifier.go
- text_ordering.go
- zero_copy_strings.go
Directories
¶
| Path | Synopsis |
|---|---|
| cmd | |
| cmd/pdfcli (command) | |
| cmd/test_coords (command) | |
| cmd/test_ordering (command) | |
| examples | |
| examples/batch_fontcache (command) | |
| examples/extract (command) | Example: Extract text from a PDF file with various methods |
| examples/extract_text_performance (command) | |
| examples/performance (command) | Example demonstrating performance optimization features |
| examples/smart_ordering (command) | |
| pdfpasswd (command) | Pdfpasswd searches for the password for an encrypted PDF by trying all strings over a given alphabet up to a given length. |