Overview ¶
Package avfaudio provides Go bindings for the AVFAudio framework.
Play, record, and process audio; configure your app’s system audio behavior.
Essentials ¶
- AVFAudio updates: Learn about important changes to AVFAudio.
System audio ¶
- Handling audio interruptions: Observe audio session notifications to ensure that your app responds appropriately to interruptions.
- Responding to audio route changes: Observe audio session notifications to ensure that your app responds appropriately to route changes.
- Routing audio to specific devices in multidevice sessions: Map audio channels to specific devices in multiroute sessions for recording and playback.
- Adding synthesized speech to calls: Provide a more accessible experience by adding your app’s audio to a call.
- Capturing stereo audio from built-in microphones: Configure an iOS device’s built-in microphones to add stereo recording capabilities to your app.
- AVAudioSession: An object that communicates to the system how you intend to use audio in your app. (AVAudioSessionSpatialExperience, AVAudioSessionActivationOptions)
- AVAudioApplication: An object that manages one or more audio sessions that belong to an app.
- AVAudioRoutingArbiter: An object for configuring macOS apps to participate in AirPods Automatic Switching.
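As a sketch of the system-audio APIs above, the index below exposes AVAudioApplication’s shared instance and an input-mute change handler. The import path and the `AVAudioApplicationClass{}` package value are assumptions; the method names come from this package’s index.

```go
package main

import (
	"fmt"

	avfaudio "example.com/avfaudio" // hypothetical import path; substitute the real module path
)

func main() {
	// SharedInstance is a class method; how the class value is exposed
	// here is an assumption.
	app := avfaudio.AVAudioApplicationClass{}.SharedInstance()

	// Install a handler that observes system input-mute changes;
	// returning true accepts the new state.
	if ok, err := app.SetInputMuteStateChangeHandlerError(func(muted bool) bool {
		fmt.Println("input muted:", muted)
		return true
	}); !ok {
		fmt.Println("handler not installed:", err)
	}
}
```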
Basic playback and recording ¶
- AVAudioPlayer: An object that plays audio data from a file or buffer. (AVAudioPlayerDelegate)
- AVAudioRecorder: An object that records audio data to a file. (AVAudioRecorderDelegate)
- AVMIDIPlayer: An object that plays MIDI data through a system sound module. (AVMIDIPlayerCompletionHandler)
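A minimal playback sketch for AVAudioPlayer. Its constructors do not appear in this chunk of the index, so `NewAudioPlayerWithContentsOfURLError` is a hypothetical name modeled on the package’s AVAudioFile constructors (`NewAudioFileForReadingError`), and `url` is assumed to be an already-built foundation.NSURL.

```go
// Hypothetical constructor name; check the AVAudioPlayer index entry
// for the exact signature.
player, err := avfaudio.NewAudioPlayerWithContentsOfURLError(url)
if err != nil {
	log.Fatal(err)
}
player.Play() // play/stop mirror Apple's AVAudioPlayer API
defer player.Stop()
```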
Advanced audio processing ¶
- Audio Engine: Perform advanced real-time and offline audio processing, implement 3D spatialization, and work with MIDI and samplers. (AVAudioEngine, AVAudioNode, AVAudioInputNode, AVAudioOutputNode, AVAudioIONode)
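A graph sketch using only calls listed under AVAudioEngine in the index below. `NewAVAudioEngine` is an assumed constructor, by analogy with `NewAVAudioFile` and `NewAVAudioFormat`; passing a nil format asks the engine to infer one, which is the Objective-C behavior and unverified for these bindings.

```go
engine := avfaudio.NewAVAudioEngine() // assumed constructor

// Wire the hardware input straight into the main mixer.
input := engine.InputNode()
mixer := engine.MainMixerNode()
engine.ConnectToFormat(input, mixer, nil) // nil format: let the engine infer (unverified)

engine.Prepare()
if ok, err := engine.StartAndReturnError(); !ok {
	log.Fatal("engine failed to start: ", err)
}
defer engine.Stop()
```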
Speech synthesis ¶
- Speech synthesis: Configure voices to speak strings of text. (AVSpeechUtterance, AVSpeechSynthesisVoice, AVSpeechSynthesizer, AVSpeechSynthesisProviderAudioUnit)
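A speech-synthesis sketch; constructors for AVSpeechUtterance and AVSpeechSynthesizer are not in this chunk of the index, so the `New…` names here are hypothetical, modeled on the package’s other constructors.

```go
// Hypothetical constructor names; check the index entries for exact signatures.
utterance := avfaudio.NewSpeechUtteranceWithString("Hello from Go")
synth := avfaudio.NewAVSpeechSynthesizer()
synth.SpeakUtterance(utterance) // mirrors Apple's -speakUtterance:
```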
Macros ¶
- Macros
Key Types ¶
- AVAudioPlayerNode - An object for scheduling the playback of buffers or segments of audio files.
- AVAudioEngine - An object that manages a graph of audio nodes, controls playback, and configures real-time rendering constraints.
- AVAudioEnvironmentNode - An object that simulates a 3D audio environment.
- AVAudioUnitMIDIInstrument - An object that represents music devices or remote instruments.
- AVAudioPlayer - An object that plays audio data from a file or buffer.
- AVAudioUnitComponent - An object that provides details about an audio unit.
- AVAudioUnitSampler - An object that you configure with one or more instrument samples, based on Apple’s Sampler audio unit.
- AVAudioInputNode - An object that connects to the system’s audio input.
- AVAudioConverter - An object that converts streams of audio between formats.
- AVAudioSequencer - An object that plays audio from a collection of MIDI events the system organizes into music tracks.
Code generated from Apple documentation. DO NOT EDIT.
Index ¶
- Variables
- func NewAVAudioApplicationMicrophoneInjectionPermissionBlock(handler AVAudioApplicationMicrophoneInjectionPermissionHandler) (objc.ID, func())
- func NewAVAudioUnitComponentBlock(handler AVAudioUnitComponentHandler) (objc.ID, func())
- func NewAVAudioUnitErrorBlock(handler AVAudioUnitErrorHandler) (objc.ID, func())
- func NewAVAudioVoiceProcessingSpeechActivityEventBlock(handler AVAudioVoiceProcessingSpeechActivityEventHandler) (objc.ID, func())
- func NewAVSpeechSynthesisPersonalVoiceAuthorizationStatusBlock(handler AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler) (objc.ID, func())
- func NewBoolBlock(handler BoolHandler) (objc.ID, func())
- func NewBoolErrorBlock(handler BoolErrorHandler) (objc.ID, func())
- func NewErrorBlock(handler ErrorHandler) (objc.ID, func())
- func NewconstAudioBufferListBlock(handler constAudioBufferListHandler) (objc.ID, func())
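The `New…Block` constructors above wrap a Go handler in an Objective-C block, returning the block’s `objc.ID` and a second function that, by the generated pattern, releases the block. The `ErrorHandler` signature is assumed here to be `func(error)`.

```go
blockID, release := avfaudio.NewErrorBlock(func(err error) {
	if err != nil {
		log.Println("operation failed:", err)
	}
})
// Release the block only after no API still holds a reference to it.
defer release()

// blockID can now be handed to any method expecting an Objective-C
// completion block of this shape.
_ = blockID
```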
- type AVAUPresetEvent
- func (a AVAUPresetEvent) Autorelease() AVAUPresetEvent
- func (a AVAUPresetEvent) Element() uint32
- func (a AVAUPresetEvent) Init() AVAUPresetEvent
- func (a AVAUPresetEvent) InitWithScopeElementDictionary(scope uint32, element uint32, presetDictionary foundation.INSDictionary) AVAUPresetEvent
- func (a AVAUPresetEvent) PresetDictionary() foundation.INSDictionary
- func (a AVAUPresetEvent) Scope() uint32
- func (a AVAUPresetEvent) SetElement(value uint32)
- func (a AVAUPresetEvent) SetScope(value uint32)
- type AVAUPresetEventClass
- type AVAudio3DAngularOrientation
- type AVAudio3DMixing
- type AVAudio3DMixingObject
- func (o AVAudio3DMixingObject) BaseObject() objectivec.Object
- func (o AVAudio3DMixingObject) Obstruction() float32
- func (o AVAudio3DMixingObject) Occlusion() float32
- func (o AVAudio3DMixingObject) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (o AVAudio3DMixingObject) Position() AVAudio3DPoint
- func (o AVAudio3DMixingObject) Rate() float32
- func (o AVAudio3DMixingObject) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (o AVAudio3DMixingObject) ReverbBlend() float32
- func (o AVAudio3DMixingObject) SetObstruction(value float32)
- func (o AVAudio3DMixingObject) SetOcclusion(value float32)
- func (o AVAudio3DMixingObject) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (o AVAudio3DMixingObject) SetPosition(value AVAudio3DPoint)
- func (o AVAudio3DMixingObject) SetRate(value float32)
- func (o AVAudio3DMixingObject) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (o AVAudio3DMixingObject) SetReverbBlend(value float32)
- func (o AVAudio3DMixingObject) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (o AVAudio3DMixingObject) SourceMode() AVAudio3DMixingSourceMode
- type AVAudio3DMixingPointSourceInHeadMode
- type AVAudio3DMixingRenderingAlgorithm
- type AVAudio3DMixingSourceMode
- type AVAudio3DPoint
- type AVAudio3DVector
- type AVAudio3DVectorOrientation
- type AVAudioApplication
- func (a AVAudioApplication) Autorelease() AVAudioApplication
- func (a AVAudioApplication) Init() AVAudioApplication
- func (a AVAudioApplication) InputMuted() bool
- func (a AVAudioApplication) RecordPermission() AVAudioApplicationRecordPermission
- func (a AVAudioApplication) SetInputMuteStateChangeHandlerError(inputMuteHandler func(bool) bool) (bool, error)
- func (a AVAudioApplication) SetInputMutedError(muted bool) (bool, error)
- type AVAudioApplicationClass
- func (ac AVAudioApplicationClass) Alloc() AVAudioApplication
- func (ac AVAudioApplicationClass) Class() objc.Class
- func (ac AVAudioApplicationClass) RequestRecordPermission(ctx context.Context) (bool, error)
- func (_AVAudioApplicationClass AVAudioApplicationClass) RequestRecordPermissionWithCompletionHandler(response BoolHandler)
- func (_AVAudioApplicationClass AVAudioApplicationClass) SharedInstance() AVAudioApplication
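The class-level `RequestRecordPermission(ctx)` wraps the callback-based `RequestRecordPermissionWithCompletionHandler` in a blocking, context-aware call. As before, the `AVAudioApplicationClass{}` value is an assumption about how the package exposes the class.

```go
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

granted, err := avfaudio.AVAudioApplicationClass{}.RequestRecordPermission(ctx)
if err != nil {
	log.Fatal(err) // e.g. the context expired before the user responded
}
fmt.Println("record permission granted:", granted)
```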
- type AVAudioApplicationMicrophoneInjectionPermission
- type AVAudioApplicationMicrophoneInjectionPermissionHandler
- type AVAudioApplicationRecordPermission
- type AVAudioBuffer
- type AVAudioBufferClass
- type AVAudioChannelCount
- type AVAudioChannelLayout
- func (a AVAudioChannelLayout) AVChannelLayoutKey() string
- func (a AVAudioChannelLayout) Autorelease() AVAudioChannelLayout
- func (a AVAudioChannelLayout) ChannelCount() AVAudioChannelCount
- func (a AVAudioChannelLayout) EncodeWithCoder(coder foundation.INSCoder)
- func (a AVAudioChannelLayout) Init() AVAudioChannelLayout
- func (a AVAudioChannelLayout) InitWithLayout(layout IAVAudioChannelLayout) AVAudioChannelLayout
- func (a AVAudioChannelLayout) InitWithLayoutTag(layoutTag objectivec.IObject) AVAudioChannelLayout
- func (a AVAudioChannelLayout) Layout() IAVAudioChannelLayout
- func (a AVAudioChannelLayout) LayoutTag() objectivec.IObject
- type AVAudioChannelLayoutClass
- func (ac AVAudioChannelLayoutClass) Alloc() AVAudioChannelLayout
- func (ac AVAudioChannelLayoutClass) Class() objc.Class
- func (_AVAudioChannelLayoutClass AVAudioChannelLayoutClass) LayoutWithLayout(layout IAVAudioChannelLayout) AVAudioChannelLayout
- func (_AVAudioChannelLayoutClass AVAudioChannelLayoutClass) LayoutWithLayoutTag(layoutTag objectivec.IObject) AVAudioChannelLayout
- type AVAudioCommonFormat
- type AVAudioCompressedBuffer
- func AVAudioCompressedBufferFromID(id objc.ID) AVAudioCompressedBuffer
- func NewAVAudioCompressedBuffer() AVAudioCompressedBuffer
- func NewAudioCompressedBufferWithFormatPacketCapacity(format IAVAudioFormat, packetCapacity AVAudioPacketCount) AVAudioCompressedBuffer
- func NewAudioCompressedBufferWithFormatPacketCapacityMaximumPacketSize(format IAVAudioFormat, packetCapacity AVAudioPacketCount, ...) AVAudioCompressedBuffer
- func (a AVAudioCompressedBuffer) Autorelease() AVAudioCompressedBuffer
- func (a AVAudioCompressedBuffer) ByteCapacity() uint32
- func (a AVAudioCompressedBuffer) ByteLength() uint32
- func (a AVAudioCompressedBuffer) Data() unsafe.Pointer
- func (a AVAudioCompressedBuffer) Init() AVAudioCompressedBuffer
- func (a AVAudioCompressedBuffer) InitWithFormatPacketCapacity(format IAVAudioFormat, packetCapacity AVAudioPacketCount) AVAudioCompressedBuffer
- func (a AVAudioCompressedBuffer) InitWithFormatPacketCapacityMaximumPacketSize(format IAVAudioFormat, packetCapacity AVAudioPacketCount, ...) AVAudioCompressedBuffer
- func (a AVAudioCompressedBuffer) MaximumPacketSize() int
- func (a AVAudioCompressedBuffer) PacketCapacity() AVAudioPacketCount
- func (a AVAudioCompressedBuffer) PacketCount() AVAudioPacketCount
- func (a AVAudioCompressedBuffer) PacketDependencies() objectivec.IObject
- func (a AVAudioCompressedBuffer) PacketDescriptions() objectivec.IObject
- func (a AVAudioCompressedBuffer) SetByteLength(value uint32)
- func (a AVAudioCompressedBuffer) SetPacketCount(value AVAudioPacketCount)
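Allocating a compressed buffer, using the format constructor listed under AVAudioFormat below. Note that a standard format is PCM; a real compressed buffer would take a format built from codec settings (`NewAudioFormatWithSettings`), which needs an NSDictionary and is omitted here — this only demonstrates the call shape.

```go
// A format and a buffer sized for 128 packets.
format := avfaudio.NewAudioFormatStandardFormatWithSampleRateChannels(44100, 2)
buf := avfaudio.NewAudioCompressedBufferWithFormatPacketCapacity(format, 128)

fmt.Println("capacity (bytes):", buf.ByteCapacity())
fmt.Println("capacity (packets):", buf.PacketCapacity())
```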
- type AVAudioCompressedBufferClass
- type AVAudioConnectionPoint
- func (a AVAudioConnectionPoint) Autorelease() AVAudioConnectionPoint
- func (a AVAudioConnectionPoint) Bus() AVAudioNodeBus
- func (a AVAudioConnectionPoint) Init() AVAudioConnectionPoint
- func (a AVAudioConnectionPoint) InitWithNodeBus(node IAVAudioNode, bus AVAudioNodeBus) AVAudioConnectionPoint
- func (a AVAudioConnectionPoint) InputConnectionPointForNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus) IAVAudioConnectionPoint
- func (a AVAudioConnectionPoint) Node() IAVAudioNode
- func (a AVAudioConnectionPoint) OutputConnectionPointsForNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus) []AVAudioConnectionPoint
- type AVAudioConnectionPointClass
- type AVAudioContentSource
- type AVAudioConverter
- func (a AVAudioConverter) ApplicableEncodeBitRates() []foundation.NSNumber
- func (a AVAudioConverter) ApplicableEncodeSampleRates() []foundation.NSNumber
- func (a AVAudioConverter) AudioSyncPacketFrequency() int
- func (a AVAudioConverter) Autorelease() AVAudioConverter
- func (a AVAudioConverter) AvailableEncodeBitRates() []foundation.NSNumber
- func (a AVAudioConverter) AvailableEncodeChannelLayoutTags() []foundation.NSNumber
- func (a AVAudioConverter) AvailableEncodeSampleRates() []foundation.NSNumber
- func (a AVAudioConverter) BitRate() int
- func (a AVAudioConverter) BitRateStrategy() string
- func (a AVAudioConverter) ChannelMap() []foundation.NSNumber
- func (a AVAudioConverter) ContentSource() AVAudioContentSource
- func (a AVAudioConverter) ConvertToBufferErrorWithInputFromBlock(outputBuffer IAVAudioBuffer, outError foundation.INSError, ...) AVAudioConverterOutputStatus
- func (a AVAudioConverter) ConvertToBufferFromBufferError(outputBuffer IAVAudioPCMBuffer, inputBuffer IAVAudioPCMBuffer) (bool, error)
- func (a AVAudioConverter) Dither() bool
- func (a AVAudioConverter) Downmix() bool
- func (a AVAudioConverter) DynamicRangeControlConfiguration() AVAudioDynamicRangeControlConfiguration
- func (a AVAudioConverter) Init() AVAudioConverter
- func (a AVAudioConverter) InitFromFormatToFormat(fromFormat IAVAudioFormat, toFormat IAVAudioFormat) AVAudioConverter
- func (a AVAudioConverter) InputFormat() IAVAudioFormat
- func (a AVAudioConverter) MagicCookie() foundation.INSData
- func (a AVAudioConverter) MaximumOutputPacketSize() int
- func (a AVAudioConverter) OutputFormat() IAVAudioFormat
- func (a AVAudioConverter) PrimeInfo() AVAudioConverterPrimeInfo
- func (a AVAudioConverter) PrimeMethod() AVAudioConverterPrimeMethod
- func (a AVAudioConverter) Reset()
- func (a AVAudioConverter) SampleRateConverterAlgorithm() string
- func (a AVAudioConverter) SampleRateConverterQuality() int
- func (a AVAudioConverter) SetAudioSyncPacketFrequency(value int)
- func (a AVAudioConverter) SetBitRate(value int)
- func (a AVAudioConverter) SetBitRateStrategy(value string)
- func (a AVAudioConverter) SetChannelMap(value []foundation.NSNumber)
- func (a AVAudioConverter) SetContentSource(value AVAudioContentSource)
- func (a AVAudioConverter) SetDither(value bool)
- func (a AVAudioConverter) SetDownmix(value bool)
- func (a AVAudioConverter) SetDynamicRangeControlConfiguration(value AVAudioDynamicRangeControlConfiguration)
- func (a AVAudioConverter) SetMagicCookie(value foundation.INSData)
- func (a AVAudioConverter) SetPrimeInfo(value AVAudioConverterPrimeInfo)
- func (a AVAudioConverter) SetPrimeMethod(value AVAudioConverterPrimeMethod)
- func (a AVAudioConverter) SetSampleRateConverterAlgorithm(value string)
- func (a AVAudioConverter) SetSampleRateConverterQuality(value int)
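Creating and configuring a converter between two PCM formats, using only calls listed above plus an assumed `NewAVAudioConverter` constructor (by analogy with `NewAVAudioFile`).

```go
src := avfaudio.NewAudioFormatStandardFormatWithSampleRateChannels(48000, 2)
dst := avfaudio.NewAudioFormatStandardFormatWithSampleRateChannels(44100, 2)

// NewAVAudioConverter is an assumed constructor.
conv := avfaudio.NewAVAudioConverter().InitFromFormatToFormat(src, dst)
conv.SetSampleRateConverterQuality(127) // kAudioConverterQuality_Max in Core Audio
conv.SetDither(true)

// Conversion itself goes through ConvertToBufferFromBufferError with two
// AVAudioPCMBuffer values sized for each format (construction omitted).
```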
- type AVAudioConverterClass
- type AVAudioConverterInputBlock
- type AVAudioConverterInputStatus
- type AVAudioConverterOutputStatus
- type AVAudioConverterPrimeInfo
- type AVAudioConverterPrimeMethod
- type AVAudioDynamicRangeControlConfiguration
- type AVAudioEngine
- func (a AVAudioEngine) AttachNode(node IAVAudioNode)
- func (a AVAudioEngine) AttachedNodes() foundation.INSSet
- func (a AVAudioEngine) AutoShutdownEnabled() bool
- func (a AVAudioEngine) Autorelease() AVAudioEngine
- func (a AVAudioEngine) ConnectMIDIToFormatEventListBlock(sourceNode IAVAudioNode, destinationNode IAVAudioNode, format IAVAudioFormat, ...)
- func (a AVAudioEngine) ConnectMIDIToNodesFormatEventListBlock(sourceNode IAVAudioNode, destinationNodes []AVAudioNode, format IAVAudioFormat, ...)
- func (a AVAudioEngine) ConnectToConnectionPointsFromBusFormat(sourceNode IAVAudioNode, destNodes []AVAudioConnectionPoint, ...)
- func (a AVAudioEngine) ConnectToFormat(node1 IAVAudioNode, node2 IAVAudioNode, format IAVAudioFormat)
- func (a AVAudioEngine) ConnectToFromBusToBusFormat(node1 IAVAudioNode, node2 IAVAudioNode, bus1 AVAudioNodeBus, ...)
- func (a AVAudioEngine) DetachNode(node IAVAudioNode)
- func (a AVAudioEngine) DisableManualRenderingMode()
- func (a AVAudioEngine) DisconnectMIDIFrom(sourceNode IAVAudioNode, destinationNode IAVAudioNode)
- func (a AVAudioEngine) DisconnectMIDIFromNodes(sourceNode IAVAudioNode, destinationNodes []AVAudioNode)
- func (a AVAudioEngine) DisconnectMIDIInput(node IAVAudioNode)
- func (a AVAudioEngine) DisconnectMIDIOutput(node IAVAudioNode)
- func (a AVAudioEngine) DisconnectNodeInput(node IAVAudioNode)
- func (a AVAudioEngine) DisconnectNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus)
- func (a AVAudioEngine) DisconnectNodeOutput(node IAVAudioNode)
- func (a AVAudioEngine) DisconnectNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus)
- func (a AVAudioEngine) EnableManualRenderingModeFormatMaximumFrameCountError(mode AVAudioEngineManualRenderingMode, pcmFormat IAVAudioFormat, ...) (bool, error)
- func (a AVAudioEngine) Init() AVAudioEngine
- func (a AVAudioEngine) InputConnectionPointForNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus) IAVAudioConnectionPoint
- func (a AVAudioEngine) InputNode() IAVAudioInputNode
- func (a AVAudioEngine) IsInManualRenderingMode() bool
- func (a AVAudioEngine) MainMixerNode() IAVAudioMixerNode
- func (a AVAudioEngine) ManualRenderingBlock() AVAudioEngineManualRenderingBlock
- func (a AVAudioEngine) ManualRenderingFormat() IAVAudioFormat
- func (a AVAudioEngine) ManualRenderingMaximumFrameCount() AVAudioFrameCount
- func (a AVAudioEngine) ManualRenderingMode() AVAudioEngineManualRenderingMode
- func (a AVAudioEngine) ManualRenderingSampleTime() AVAudioFramePosition
- func (a AVAudioEngine) MusicSequence() objectivec.IObject
- func (a AVAudioEngine) OutputConnectionPointsForNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus) []AVAudioConnectionPoint
- func (a AVAudioEngine) OutputNode() IAVAudioOutputNode
- func (a AVAudioEngine) Pause()
- func (a AVAudioEngine) Prepare()
- func (a AVAudioEngine) RenderOfflineToBufferError(numberOfFrames AVAudioFrameCount, buffer IAVAudioPCMBuffer) (AVAudioEngineManualRenderingStatus, error)
- func (a AVAudioEngine) Reset()
- func (a AVAudioEngine) Running() bool
- func (a AVAudioEngine) SetAutoShutdownEnabled(value bool)
- func (a AVAudioEngine) SetMusicSequence(value objectivec.IObject)
- func (a AVAudioEngine) StartAndReturnError() (bool, error)
- func (a AVAudioEngine) Stop()
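Offline (manual) rendering with the engine methods above. The mode constant’s exact Go name and the truncated third parameter (a maximum frame count in Apple’s API) are assumptions.

```go
format := avfaudio.NewAudioFormatStandardFormatWithSampleRateChannels(44100, 2)

// Constant name assumed; 4096 is the maximum frame count per render call.
ok, err := engine.EnableManualRenderingModeFormatMaximumFrameCountError(
	avfaudio.AVAudioEngineManualRenderingModeOffline, format, 4096)
if !ok {
	log.Fatal(err)
}
defer engine.DisableManualRenderingMode()

// After starting the engine, pull rendered audio into a PCM buffer
// (buffer construction omitted).
status, err := engine.RenderOfflineToBufferError(4096, buffer)
_ = status
```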
- type AVAudioEngineClass
- type AVAudioEngineManualRenderingBlock
- type AVAudioEngineManualRenderingError
- type AVAudioEngineManualRenderingMode
- type AVAudioEngineManualRenderingStatus
- type AVAudioEnvironmentDistanceAttenuationModel
- type AVAudioEnvironmentDistanceAttenuationParameters
- func (a AVAudioEnvironmentDistanceAttenuationParameters) Autorelease() AVAudioEnvironmentDistanceAttenuationParameters
- func (a AVAudioEnvironmentDistanceAttenuationParameters) DistanceAttenuationModel() AVAudioEnvironmentDistanceAttenuationModel
- func (a AVAudioEnvironmentDistanceAttenuationParameters) Init() AVAudioEnvironmentDistanceAttenuationParameters
- func (a AVAudioEnvironmentDistanceAttenuationParameters) MaximumDistance() float32
- func (a AVAudioEnvironmentDistanceAttenuationParameters) ReferenceDistance() float32
- func (a AVAudioEnvironmentDistanceAttenuationParameters) RolloffFactor() float32
- func (a AVAudioEnvironmentDistanceAttenuationParameters) SetDistanceAttenuationModel(value AVAudioEnvironmentDistanceAttenuationModel)
- func (a AVAudioEnvironmentDistanceAttenuationParameters) SetMaximumDistance(value float32)
- func (a AVAudioEnvironmentDistanceAttenuationParameters) SetReferenceDistance(value float32)
- func (a AVAudioEnvironmentDistanceAttenuationParameters) SetRolloffFactor(value float32)
- type AVAudioEnvironmentDistanceAttenuationParametersClass
- type AVAudioEnvironmentNode
- func (a AVAudioEnvironmentNode) ApplicableRenderingAlgorithms() []foundation.NSNumber
- func (a AVAudioEnvironmentNode) Autorelease() AVAudioEnvironmentNode
- func (a AVAudioEnvironmentNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (a AVAudioEnvironmentNode) DistanceAttenuationParameters() IAVAudioEnvironmentDistanceAttenuationParameters
- func (a AVAudioEnvironmentNode) Init() AVAudioEnvironmentNode
- func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_4() objectivec.IObject
- func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_5_0() objectivec.IObject
- func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_6_0() objectivec.IObject
- func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_7_0() objectivec.IObject
- func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_7_0_Front() objectivec.IObject
- func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_8() objectivec.IObject
- func (a AVAudioEnvironmentNode) ListenerAngularOrientation() AVAudio3DAngularOrientation
- func (a AVAudioEnvironmentNode) ListenerHeadTrackingEnabled() bool
- func (a AVAudioEnvironmentNode) ListenerPosition() AVAudio3DPoint
- func (a AVAudioEnvironmentNode) ListenerVectorOrientation() AVAudio3DVectorOrientation
- func (a AVAudioEnvironmentNode) NextAvailableInputBus() AVAudioNodeBus
- func (a AVAudioEnvironmentNode) Obstruction() float32
- func (a AVAudioEnvironmentNode) Occlusion() float32
- func (a AVAudioEnvironmentNode) OutputType() AVAudioEnvironmentOutputType
- func (a AVAudioEnvironmentNode) OutputVolume() float32
- func (a AVAudioEnvironmentNode) Pan() float32
- func (a AVAudioEnvironmentNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (a AVAudioEnvironmentNode) Position() AVAudio3DPoint
- func (a AVAudioEnvironmentNode) Rate() float32
- func (a AVAudioEnvironmentNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (a AVAudioEnvironmentNode) ReverbBlend() float32
- func (a AVAudioEnvironmentNode) ReverbParameters() IAVAudioEnvironmentReverbParameters
- func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_4(value objectivec.IObject)
- func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_5_0(value objectivec.IObject)
- func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_6_0(value objectivec.IObject)
- func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_7_0(value objectivec.IObject)
- func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_7_0_Front(value objectivec.IObject)
- func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_8(value objectivec.IObject)
- func (a AVAudioEnvironmentNode) SetListenerAngularOrientation(value AVAudio3DAngularOrientation)
- func (a AVAudioEnvironmentNode) SetListenerHeadTrackingEnabled(value bool)
- func (a AVAudioEnvironmentNode) SetListenerPosition(value AVAudio3DPoint)
- func (a AVAudioEnvironmentNode) SetListenerVectorOrientation(value AVAudio3DVectorOrientation)
- func (a AVAudioEnvironmentNode) SetObstruction(value float32)
- func (a AVAudioEnvironmentNode) SetOcclusion(value float32)
- func (a AVAudioEnvironmentNode) SetOutputType(value AVAudioEnvironmentOutputType)
- func (a AVAudioEnvironmentNode) SetOutputVolume(value float32)
- func (a AVAudioEnvironmentNode) SetPan(value float32)
- func (a AVAudioEnvironmentNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (a AVAudioEnvironmentNode) SetPosition(value AVAudio3DPoint)
- func (a AVAudioEnvironmentNode) SetRate(value float32)
- func (a AVAudioEnvironmentNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (a AVAudioEnvironmentNode) SetReverbBlend(value float32)
- func (a AVAudioEnvironmentNode) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (a AVAudioEnvironmentNode) SetVolume(value float32)
- func (a AVAudioEnvironmentNode) SourceMode() AVAudio3DMixingSourceMode
- func (a AVAudioEnvironmentNode) Volume() float32
- type AVAudioEnvironmentNodeClass
- type AVAudioEnvironmentOutputType
- type AVAudioEnvironmentReverbParameters
- func (a AVAudioEnvironmentReverbParameters) Autorelease() AVAudioEnvironmentReverbParameters
- func (a AVAudioEnvironmentReverbParameters) Enable() bool
- func (a AVAudioEnvironmentReverbParameters) FilterParameters() IAVAudioUnitEQFilterParameters
- func (a AVAudioEnvironmentReverbParameters) Init() AVAudioEnvironmentReverbParameters
- func (a AVAudioEnvironmentReverbParameters) Level() float32
- func (a AVAudioEnvironmentReverbParameters) LoadFactoryReverbPreset(preset AVAudioUnitReverbPreset)
- func (a AVAudioEnvironmentReverbParameters) SetEnable(value bool)
- func (a AVAudioEnvironmentReverbParameters) SetLevel(value float32)
- type AVAudioEnvironmentReverbParametersClass
- type AVAudioFile
- func AVAudioFileFromID(id objc.ID) AVAudioFile
- func NewAVAudioFile() AVAudioFile
- func NewAudioFileForReadingCommonFormatInterleavedError(fileURL foundation.INSURL, format AVAudioCommonFormat, interleaved bool) (AVAudioFile, error)
- func NewAudioFileForReadingError(fileURL foundation.INSURL) (AVAudioFile, error)
- func NewAudioFileForWritingSettingsCommonFormatInterleavedError(fileURL foundation.INSURL, settings foundation.INSDictionary, ...) (AVAudioFile, error)
- func NewAudioFileForWritingSettingsError(fileURL foundation.INSURL, settings foundation.INSDictionary) (AVAudioFile, error)
- func (a AVAudioFile) AVAudioFileTypeKey() string
- func (a AVAudioFile) Autorelease() AVAudioFile
- func (a AVAudioFile) Close()
- func (a AVAudioFile) FileFormat() IAVAudioFormat
- func (a AVAudioFile) FramePosition() AVAudioFramePosition
- func (a AVAudioFile) Init() AVAudioFile
- func (a AVAudioFile) InitForReadingCommonFormatInterleavedError(fileURL foundation.INSURL, format AVAudioCommonFormat, interleaved bool) (AVAudioFile, error)
- func (a AVAudioFile) InitForReadingError(fileURL foundation.INSURL) (AVAudioFile, error)
- func (a AVAudioFile) InitForWritingSettingsCommonFormatInterleavedError(fileURL foundation.INSURL, settings foundation.INSDictionary, ...) (AVAudioFile, error)
- func (a AVAudioFile) InitForWritingSettingsError(fileURL foundation.INSURL, settings foundation.INSDictionary) (AVAudioFile, error)
- func (a AVAudioFile) IsOpen() bool
- func (a AVAudioFile) Length() AVAudioFramePosition
- func (a AVAudioFile) ProcessingFormat() IAVAudioFormat
- func (a AVAudioFile) ReadIntoBufferError(buffer IAVAudioPCMBuffer) (bool, error)
- func (a AVAudioFile) ReadIntoBufferFrameCountError(buffer IAVAudioPCMBuffer, frames AVAudioFrameCount) (bool, error)
- func (a AVAudioFile) SetFramePosition(value AVAudioFramePosition)
- func (a AVAudioFile) Url() foundation.INSURL
- func (a AVAudioFile) WriteFromBufferError(buffer IAVAudioPCMBuffer) (bool, error)
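Opening a file for reading and inspecting it, using the constructors and accessors listed above. Building the foundation.NSURL is omitted, and `SampleRate()` on the processing format comes from the AVAudioFormat entry below.

```go
f, err := avfaudio.NewAudioFileForReadingError(url) // url: a prebuilt foundation.NSURL
if err != nil {
	log.Fatal(err)
}
defer f.Close()

fmt.Println("frames:", f.Length())
fmt.Println("sample rate:", f.ProcessingFormat().SampleRate())

// Reads go through ReadIntoBufferError with an AVAudioPCMBuffer whose
// format matches ProcessingFormat (buffer construction omitted).
```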
- type AVAudioFileClass
- type AVAudioFormat
- func AVAudioFormatFromID(id objc.ID) AVAudioFormat
- func NewAVAudioFormat() AVAudioFormat
- func NewAudioFormatStandardFormatWithSampleRateChannelLayout(sampleRate float64, layout IAVAudioChannelLayout) AVAudioFormat
- func NewAudioFormatStandardFormatWithSampleRateChannels(sampleRate float64, channels AVAudioChannelCount) AVAudioFormat
- func NewAudioFormatWithCMAudioFormatDescription(formatDescription coremedia.CMFormatDescriptionRef) AVAudioFormat
- func NewAudioFormatWithCommonFormatSampleRateChannelsInterleaved(format AVAudioCommonFormat, sampleRate float64, channels AVAudioChannelCount, ...) AVAudioFormat
- func NewAudioFormatWithCommonFormatSampleRateInterleavedChannelLayout(format AVAudioCommonFormat, sampleRate float64, interleaved bool, ...) AVAudioFormat
- func NewAudioFormatWithSettings(settings foundation.INSDictionary) AVAudioFormat
- func NewAudioFormatWithStreamDescription(asbd objectivec.IObject) AVAudioFormat
- func NewAudioFormatWithStreamDescriptionChannelLayout(asbd objectivec.IObject, layout IAVAudioChannelLayout) AVAudioFormat
- func (a AVAudioFormat) AVChannelLayoutKey() string
- func (a AVAudioFormat) Autorelease() AVAudioFormat
- func (a AVAudioFormat) ChannelCount() AVAudioChannelCount
- func (a AVAudioFormat) ChannelLayout() IAVAudioChannelLayout
- func (a AVAudioFormat) CommonFormat() AVAudioCommonFormat
- func (a AVAudioFormat) EncodeWithCoder(coder foundation.INSCoder)
- func (a AVAudioFormat) FormatDescription() coremedia.CMFormatDescriptionRef
- func (a AVAudioFormat) Init() AVAudioFormat
- func (a AVAudioFormat) InitStandardFormatWithSampleRateChannelLayout(sampleRate float64, layout IAVAudioChannelLayout) AVAudioFormat
- func (a AVAudioFormat) InitStandardFormatWithSampleRateChannels(sampleRate float64, channels AVAudioChannelCount) AVAudioFormat
- func (a AVAudioFormat) InitWithCMAudioFormatDescription(formatDescription coremedia.CMFormatDescriptionRef) AVAudioFormat
- func (a AVAudioFormat) InitWithCommonFormatSampleRateChannelsInterleaved(format AVAudioCommonFormat, sampleRate float64, channels AVAudioChannelCount, ...) AVAudioFormat
- func (a AVAudioFormat) InitWithCommonFormatSampleRateInterleavedChannelLayout(format AVAudioCommonFormat, sampleRate float64, interleaved bool, ...) AVAudioFormat
- func (a AVAudioFormat) InitWithSettings(settings foundation.INSDictionary) AVAudioFormat
- func (a AVAudioFormat) InitWithStreamDescription(asbd objectivec.IObject) AVAudioFormat
- func (a AVAudioFormat) InitWithStreamDescriptionChannelLayout(asbd objectivec.IObject, layout IAVAudioChannelLayout) AVAudioFormat
- func (a AVAudioFormat) Interleaved() bool
- func (a AVAudioFormat) MagicCookie() foundation.INSData
- func (a AVAudioFormat) SampleRate() float64
- func (a AVAudioFormat) SetMagicCookie(value foundation.INSData)
- func (a AVAudioFormat) Settings() foundation.INSDictionary
- func (a AVAudioFormat) Standard() bool
- func (a AVAudioFormat) StreamDescription() objectivec.IObject
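Building a standard format and reading back its properties; in Apple’s API, “standard” means deinterleaved 32-bit float PCM.

```go
f := avfaudio.NewAudioFormatStandardFormatWithSampleRateChannels(44100, 2)
fmt.Println("sample rate:", f.SampleRate())  // 44100
fmt.Println("channels:", f.ChannelCount())   // 2
fmt.Println("interleaved:", f.Interleaved()) // false: standard formats are deinterleaved
fmt.Println("standard:", f.Standard())       // true
```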
- type AVAudioFormatClass
- type AVAudioFrameCount
- type AVAudioFramePosition
- type AVAudioIONode
- func (a AVAudioIONode) AudioUnit() IAVAudioUnit
- func (a AVAudioIONode) Autorelease() AVAudioIONode
- func (a AVAudioIONode) Init() AVAudioIONode
- func (a AVAudioIONode) PresentationLatency() float64
- func (a AVAudioIONode) SetVoiceProcessingEnabledError(enabled bool) (bool, error)
- func (a AVAudioIONode) VoiceProcessingEnabled() bool
- type AVAudioIONodeClass
- type AVAudioIONodeInputBlock
- type AVAudioInputNode
- func (a AVAudioInputNode) Autorelease() AVAudioInputNode
- func (a AVAudioInputNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (a AVAudioInputNode) Init() AVAudioInputNode
- func (a AVAudioInputNode) Obstruction() float32
- func (a AVAudioInputNode) Occlusion() float32
- func (a AVAudioInputNode) Pan() float32
- func (a AVAudioInputNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (a AVAudioInputNode) Position() AVAudio3DPoint
- func (a AVAudioInputNode) Rate() float32
- func (a AVAudioInputNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (a AVAudioInputNode) ReverbBlend() float32
- func (a AVAudioInputNode) SetManualRenderingInputPCMFormatInputBlock(format IAVAudioFormat, block AVAudioIONodeInputBlock) bool
- func (a AVAudioInputNode) SetMutedSpeechActivityEventListener(listenerBlock AVAudioVoiceProcessingSpeechActivityEventHandler) bool
- func (a AVAudioInputNode) SetMutedSpeechActivityEventListenerSync(ctx context.Context) (AVAudioVoiceProcessingSpeechActivityEvent, error)
- func (a AVAudioInputNode) SetObstruction(value float32)
- func (a AVAudioInputNode) SetOcclusion(value float32)
- func (a AVAudioInputNode) SetPan(value float32)
- func (a AVAudioInputNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (a AVAudioInputNode) SetPosition(value AVAudio3DPoint)
- func (a AVAudioInputNode) SetRate(value float32)
- func (a AVAudioInputNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (a AVAudioInputNode) SetReverbBlend(value float32)
- func (a AVAudioInputNode) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (a AVAudioInputNode) SetVoiceProcessingAGCEnabled(value bool)
- func (a AVAudioInputNode) SetVoiceProcessingBypassed(value bool)
- func (a AVAudioInputNode) SetVoiceProcessingInputMuted(value bool)
- func (a AVAudioInputNode) SetVoiceProcessingOtherAudioDuckingConfiguration(value AVAudioVoiceProcessingOtherAudioDuckingConfiguration)
- func (a AVAudioInputNode) SetVolume(value float32)
- func (a AVAudioInputNode) SourceMode() AVAudio3DMixingSourceMode
- func (a AVAudioInputNode) VoiceProcessingAGCEnabled() bool
- func (a AVAudioInputNode) VoiceProcessingBypassed() bool
- func (a AVAudioInputNode) VoiceProcessingInputMuted() bool
- func (a AVAudioInputNode) VoiceProcessingOtherAudioDuckingConfiguration() AVAudioVoiceProcessingOtherAudioDuckingConfiguration
- func (a AVAudioInputNode) Volume() float32
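Voice-processing configuration on the input node. `SetVoiceProcessingEnabledError` is listed under AVAudioIONode above, which AVAudioInputNode is assumed to embed, and Apple documents that it must be called before the engine starts.

```go
input := engine.InputNode()

// Enable the voice-processing (echo-cancelling) I/O unit before the
// engine starts; this method comes from AVAudioIONode.
if ok, err := input.SetVoiceProcessingEnabledError(true); !ok {
	log.Fatal(err)
}
input.SetVoiceProcessingAGCEnabled(true)  // automatic gain control
input.SetVoiceProcessingInputMuted(false) // start unmuted
```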
- type AVAudioInputNodeClass
- type AVAudioMixerNode
- func (a AVAudioMixerNode) Autorelease() AVAudioMixerNode
- func (a AVAudioMixerNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (a AVAudioMixerNode) Init() AVAudioMixerNode
- func (a AVAudioMixerNode) NextAvailableInputBus() AVAudioNodeBus
- func (a AVAudioMixerNode) Obstruction() float32
- func (a AVAudioMixerNode) Occlusion() float32
- func (a AVAudioMixerNode) OutputVolume() float32
- func (a AVAudioMixerNode) Pan() float32
- func (a AVAudioMixerNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (a AVAudioMixerNode) Position() AVAudio3DPoint
- func (a AVAudioMixerNode) Rate() float32
- func (a AVAudioMixerNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (a AVAudioMixerNode) ReverbBlend() float32
- func (a AVAudioMixerNode) SetObstruction(value float32)
- func (a AVAudioMixerNode) SetOcclusion(value float32)
- func (a AVAudioMixerNode) SetOutputVolume(value float32)
- func (a AVAudioMixerNode) SetPan(value float32)
- func (a AVAudioMixerNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (a AVAudioMixerNode) SetPosition(value AVAudio3DPoint)
- func (a AVAudioMixerNode) SetRate(value float32)
- func (a AVAudioMixerNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (a AVAudioMixerNode) SetReverbBlend(value float32)
- func (a AVAudioMixerNode) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (a AVAudioMixerNode) SetVolume(value float32)
- func (a AVAudioMixerNode) SourceMode() AVAudio3DMixingSourceMode
- func (a AVAudioMixerNode) Volume() float32
- type AVAudioMixerNodeClass
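The mixer-node accessors above compose naturally into a ducking helper. This is a minimal sketch: the import path is a placeholder for wherever these bindings live, and `fadeForNarration` is a hypothetical name; it uses only the `OutputVolume`, `SetOutputVolume`, and `SetPan` methods listed above.

```go
package main

// Placeholder import path; substitute the real location of these bindings.
import "example.com/avfaudio"

// fadeForNarration ducks a mixer while narration plays, then restores the
// previous level. The mixer is assumed to be attached to a running engine.
func fadeForNarration(mixer avfaudio.AVAudioMixerNode, narrate func()) {
	prev := mixer.OutputVolume()      // remember the current mix level
	mixer.SetOutputVolume(prev * 0.3) // duck to 30% while narration plays
	mixer.SetPan(0)                   // keep the music bed centered

	narrate()

	mixer.SetOutputVolume(prev) // restore the original level
}
```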
- type AVAudioMixing
- type AVAudioMixingDestination
- func (a AVAudioMixingDestination) Autorelease() AVAudioMixingDestination
- func (a AVAudioMixingDestination) ConnectionPoint() IAVAudioConnectionPoint
- func (a AVAudioMixingDestination) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (a AVAudioMixingDestination) Init() AVAudioMixingDestination
- func (a AVAudioMixingDestination) Obstruction() float32
- func (a AVAudioMixingDestination) Occlusion() float32
- func (a AVAudioMixingDestination) Pan() float32
- func (a AVAudioMixingDestination) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (a AVAudioMixingDestination) Position() AVAudio3DPoint
- func (a AVAudioMixingDestination) Rate() float32
- func (a AVAudioMixingDestination) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (a AVAudioMixingDestination) ReverbBlend() float32
- func (a AVAudioMixingDestination) SetObstruction(value float32)
- func (a AVAudioMixingDestination) SetOcclusion(value float32)
- func (a AVAudioMixingDestination) SetPan(value float32)
- func (a AVAudioMixingDestination) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (a AVAudioMixingDestination) SetPosition(value AVAudio3DPoint)
- func (a AVAudioMixingDestination) SetRate(value float32)
- func (a AVAudioMixingDestination) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (a AVAudioMixingDestination) SetReverbBlend(value float32)
- func (a AVAudioMixingDestination) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (a AVAudioMixingDestination) SetVolume(value float32)
- func (a AVAudioMixingDestination) SourceMode() AVAudio3DMixingSourceMode
- func (a AVAudioMixingDestination) Volume() float32
- type AVAudioMixingDestinationClass
- type AVAudioMixingObject
- func (o AVAudioMixingObject) BaseObject() objectivec.Object
- func (o AVAudioMixingObject) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (o AVAudioMixingObject) Obstruction() float32
- func (o AVAudioMixingObject) Occlusion() float32
- func (o AVAudioMixingObject) Pan() float32
- func (o AVAudioMixingObject) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (o AVAudioMixingObject) Position() AVAudio3DPoint
- func (o AVAudioMixingObject) Rate() float32
- func (o AVAudioMixingObject) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (o AVAudioMixingObject) ReverbBlend() float32
- func (o AVAudioMixingObject) SetObstruction(value float32)
- func (o AVAudioMixingObject) SetOcclusion(value float32)
- func (o AVAudioMixingObject) SetPan(value float32)
- func (o AVAudioMixingObject) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (o AVAudioMixingObject) SetPosition(value AVAudio3DPoint)
- func (o AVAudioMixingObject) SetRate(value float32)
- func (o AVAudioMixingObject) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (o AVAudioMixingObject) SetReverbBlend(value float32)
- func (o AVAudioMixingObject) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (o AVAudioMixingObject) SetVolume(value float32)
- func (o AVAudioMixingObject) SourceMode() AVAudio3DMixingSourceMode
- func (o AVAudioMixingObject) Volume() float32
- type AVAudioNode
- func (a AVAudioNode) AUAudioUnit() objectivec.IObject
- func (a AVAudioNode) Autorelease() AVAudioNode
- func (a AVAudioNode) Engine() IAVAudioEngine
- func (a AVAudioNode) Init() AVAudioNode
- func (a AVAudioNode) InputFormatForBus(bus AVAudioNodeBus) IAVAudioFormat
- func (a AVAudioNode) InstallTapOnBusBufferSizeFormatBlock(bus AVAudioNodeBus, bufferSize AVAudioFrameCount, format IAVAudioFormat, ...)
- func (a AVAudioNode) LastRenderTime() IAVAudioTime
- func (a AVAudioNode) Latency() float64
- func (a AVAudioNode) NameForInputBus(bus AVAudioNodeBus) string
- func (a AVAudioNode) NameForOutputBus(bus AVAudioNodeBus) string
- func (a AVAudioNode) NumberOfInputs() uint
- func (a AVAudioNode) NumberOfOutputs() uint
- func (a AVAudioNode) OutputFormatForBus(bus AVAudioNodeBus) IAVAudioFormat
- func (a AVAudioNode) OutputPresentationLatency() float64
- func (a AVAudioNode) RemoveTapOnBus(bus AVAudioNodeBus)
- func (a AVAudioNode) Reset()
- type AVAudioNodeBus
- type AVAudioNodeClass
- type AVAudioNodeCompletionHandler
- type AVAudioNodeTapBlock
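A common use of the node API above is metering via a tap. This sketch assumes the elided final parameter of `InstallTapOnBusBufferSizeFormatBlock` is the `AVAudioNodeTapBlock` listed above, and that the block receives each rendered buffer plus its timestamp, mirroring AVFAudio's `installTap(onBus:bufferSize:format:block:)`; verify the block's Go signature against the bindings before use.

```go
package main

import "example.com/avfaudio" // placeholder import path

// meter installs a tap on bus 0 of a node to observe rendered audio.
func meter(node avfaudio.AVAudioNode) {
	format := node.OutputFormatForBus(0) // tap in the node's own output format
	node.InstallTapOnBusBufferSizeFormatBlock(0, avfaudio.AVAudioFrameCount(4096), format,
		func(buffer avfaudio.AVAudioPCMBuffer, when avfaudio.AVAudioTime) {
			_ = buffer.FrameLength() // inspect or copy samples here
		})
	// Later, when observation is no longer needed:
	node.RemoveTapOnBus(0)
}
```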
- type AVAudioOutputNode
- func (a AVAudioOutputNode) Autorelease() AVAudioOutputNode
- func (a AVAudioOutputNode) Init() AVAudioOutputNode
- func (a AVAudioOutputNode) IntendedSpatialExperience() objectivec.IObject
- func (a AVAudioOutputNode) ManualRenderingFormat() IAVAudioFormat
- func (a AVAudioOutputNode) SetIntendedSpatialExperience(value objectivec.IObject)
- func (a AVAudioOutputNode) SetManualRenderingFormat(value IAVAudioFormat)
- type AVAudioOutputNodeClass
- type AVAudioPCMBuffer
- func (a AVAudioPCMBuffer) Autorelease() AVAudioPCMBuffer
- func (a AVAudioPCMBuffer) FloatChannelData() unsafe.Pointer
- func (a AVAudioPCMBuffer) FrameCapacity() AVAudioFrameCount
- func (a AVAudioPCMBuffer) FrameLength() AVAudioFrameCount
- func (a AVAudioPCMBuffer) Init() AVAudioPCMBuffer
- func (a AVAudioPCMBuffer) InitWithPCMFormatBufferListNoCopyDeallocator(format IAVAudioFormat, bufferList objectivec.IObject, ...) AVAudioPCMBuffer
- func (a AVAudioPCMBuffer) InitWithPCMFormatBufferListNoCopyDeallocatorSync(ctx context.Context, format IAVAudioFormat, bufferList objectivec.IObject) (*objectivec.Object, error)
- func (a AVAudioPCMBuffer) InitWithPCMFormatFrameCapacity(format IAVAudioFormat, frameCapacity AVAudioFrameCount) AVAudioPCMBuffer
- func (a AVAudioPCMBuffer) Int16ChannelData() unsafe.Pointer
- func (a AVAudioPCMBuffer) Int32ChannelData() unsafe.Pointer
- func (a AVAudioPCMBuffer) SetFrameLength(value AVAudioFrameCount)
- func (a AVAudioPCMBuffer) Stride() uint
- type AVAudioPCMBufferClass
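A sketch of buffer allocation with the initializers above. `AVAudioPCMBufferClass` is assumed to expose the same `Alloc` + `Init…` pattern these bindings use elsewhere (compare `AVAudioTimeClass.Alloc` further down); the import path is a placeholder.

```go
package main

import "example.com/avfaudio" // placeholder import path

// oneSecondBuffer allocates a buffer holding one second of audio in a
// 44.1 kHz format obtained elsewhere.
func oneSecondBuffer(cls avfaudio.AVAudioPCMBufferClass, format avfaudio.IAVAudioFormat) avfaudio.AVAudioPCMBuffer {
	buf := cls.Alloc().InitWithPCMFormatFrameCapacity(format, avfaudio.AVAudioFrameCount(44100))
	// A fresh buffer reports zero valid frames; after writing samples, set
	// FrameLength so consumers know how much of the capacity is real data.
	buf.SetFrameLength(buf.FrameCapacity())
	return buf
}
```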
- type AVAudioPacketCount
- type AVAudioPlayer
- func AVAudioPlayerFromID(id objc.ID) AVAudioPlayer
- func NewAVAudioPlayer() AVAudioPlayer
- func NewAudioPlayerWithContentsOfURLError(url foundation.INSURL) (AVAudioPlayer, error)
- func NewAudioPlayerWithContentsOfURLFileTypeHintError(url foundation.INSURL, utiString string) (AVAudioPlayer, error)
- func NewAudioPlayerWithDataError(data foundation.INSData) (AVAudioPlayer, error)
- func NewAudioPlayerWithDataFileTypeHintError(data foundation.INSData, utiString string) (AVAudioPlayer, error)
- func (a AVAudioPlayer) Autorelease() AVAudioPlayer
- func (a AVAudioPlayer) AveragePowerForChannel(channelNumber uint) float32
- func (a AVAudioPlayer) CurrentDevice() string
- func (a AVAudioPlayer) CurrentTime() float64
- func (a AVAudioPlayer) Data() foundation.INSData
- func (a AVAudioPlayer) Delegate() AVAudioPlayerDelegate
- func (a AVAudioPlayer) DeviceCurrentTime() float64
- func (a AVAudioPlayer) Duration() float64
- func (a AVAudioPlayer) EnableRate() bool
- func (a AVAudioPlayer) Format() IAVAudioFormat
- func (a AVAudioPlayer) Init() AVAudioPlayer
- func (a AVAudioPlayer) InitWithContentsOfURLError(url foundation.INSURL) (AVAudioPlayer, error)
- func (a AVAudioPlayer) InitWithContentsOfURLFileTypeHintError(url foundation.INSURL, utiString string) (AVAudioPlayer, error)
- func (a AVAudioPlayer) InitWithDataError(data foundation.INSData) (AVAudioPlayer, error)
- func (a AVAudioPlayer) InitWithDataFileTypeHintError(data foundation.INSData, utiString string) (AVAudioPlayer, error)
- func (a AVAudioPlayer) IntendedSpatialExperience() objectivec.IObject
- func (a AVAudioPlayer) MeteringEnabled() bool
- func (a AVAudioPlayer) NumberOfChannels() uint
- func (a AVAudioPlayer) NumberOfLoops() int
- func (a AVAudioPlayer) Pan() float32
- func (a AVAudioPlayer) Pause()
- func (a AVAudioPlayer) PeakPowerForChannel(channelNumber uint) float32
- func (a AVAudioPlayer) Play() bool
- func (a AVAudioPlayer) PlayAtTime(time float64) bool
- func (a AVAudioPlayer) Playing() bool
- func (a AVAudioPlayer) PrepareToPlay() bool
- func (a AVAudioPlayer) Rate() float32
- func (a AVAudioPlayer) SetCurrentDevice(value string)
- func (a AVAudioPlayer) SetCurrentTime(value float64)
- func (a AVAudioPlayer) SetDelegate(value AVAudioPlayerDelegate)
- func (a AVAudioPlayer) SetEnableRate(value bool)
- func (a AVAudioPlayer) SetIntendedSpatialExperience(value objectivec.IObject)
- func (a AVAudioPlayer) SetMeteringEnabled(value bool)
- func (a AVAudioPlayer) SetNumberOfLoops(value int)
- func (a AVAudioPlayer) SetPan(value float32)
- func (a AVAudioPlayer) SetRate(value float32)
- func (a AVAudioPlayer) SetVolume(value float32)
- func (a AVAudioPlayer) SetVolumeFadeDuration(volume float32, duration float64)
- func (a AVAudioPlayer) Settings() foundation.INSDictionary
- func (a AVAudioPlayer) Stop()
- func (a AVAudioPlayer) UpdateMeters()
- func (a AVAudioPlayer) Url() foundation.INSURL
- func (a AVAudioPlayer) Volume() float32
- type AVAudioPlayerClass
- type AVAudioPlayerDelegate
- type AVAudioPlayerDelegateConfig
- type AVAudioPlayerDelegateObject
- func (o AVAudioPlayerDelegateObject) AudioPlayerDecodeErrorDidOccurError(player IAVAudioPlayer, error_ foundation.INSError)
- func (o AVAudioPlayerDelegateObject) AudioPlayerDidFinishPlayingSuccessfully(player IAVAudioPlayer, flag bool)
- func (o AVAudioPlayerDelegateObject) BaseObject() objectivec.Object
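The player constructors and transport methods above are enough for simple file playback. A minimal sketch, with placeholder import paths; `playOnce` is a hypothetical helper name.

```go
package main

import (
	"errors"

	"example.com/avfaudio"   // placeholder import paths for these bindings
	"example.com/foundation" // and their Foundation counterparts
)

// playOnce loads a local audio file and plays it a single time.
func playOnce(url foundation.INSURL) (avfaudio.AVAudioPlayer, error) {
	player, err := avfaudio.NewAudioPlayerWithContentsOfURLError(url)
	if err != nil {
		return player, err
	}
	player.SetVolume(0.8)      // linear gain, 0.0 to 1.0
	player.SetNumberOfLoops(0) // 0 plays once; -1 loops until Stop
	player.PrepareToPlay()     // preload buffers to reduce start latency
	if !player.Play() {
		return player, errors.New("playback could not start")
	}
	return player, nil
}
```

The caller should keep a reference to the returned player for the duration of playback; a player that is released stops sounding.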
- type AVAudioPlayerNode
- func (a AVAudioPlayerNode) Autorelease() AVAudioPlayerNode
- func (a AVAudioPlayerNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (a AVAudioPlayerNode) Init() AVAudioPlayerNode
- func (a AVAudioPlayerNode) NodeTimeForPlayerTime(playerTime IAVAudioTime) IAVAudioTime
- func (a AVAudioPlayerNode) Obstruction() float32
- func (a AVAudioPlayerNode) Occlusion() float32
- func (a AVAudioPlayerNode) Pan() float32
- func (a AVAudioPlayerNode) Pause()
- func (a AVAudioPlayerNode) Play()
- func (a AVAudioPlayerNode) PlayAtTime(when IAVAudioTime)
- func (a AVAudioPlayerNode) PlayerTimeForNodeTime(nodeTime IAVAudioTime) IAVAudioTime
- func (a AVAudioPlayerNode) Playing() bool
- func (a AVAudioPlayerNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (a AVAudioPlayerNode) Position() AVAudio3DPoint
- func (a AVAudioPlayerNode) PrepareWithFrameCount(frameCount AVAudioFrameCount)
- func (a AVAudioPlayerNode) Rate() float32
- func (a AVAudioPlayerNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (a AVAudioPlayerNode) ReverbBlend() float32
- func (a AVAudioPlayerNode) ScheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler(buffer IAVAudioPCMBuffer, when IAVAudioTime, ...)
- func (a AVAudioPlayerNode) ScheduleBufferAtTimeOptionsCompletionHandler(buffer IAVAudioPCMBuffer, when IAVAudioTime, ...)
- func (a AVAudioPlayerNode) ScheduleBufferCompletionCallbackTypeCompletionHandler(buffer IAVAudioPCMBuffer, callbackType AVAudioPlayerNodeCompletionCallbackType, ...)
- func (a AVAudioPlayerNode) ScheduleBufferCompletionHandler(buffer IAVAudioPCMBuffer, completionHandler ErrorHandler)
- func (a AVAudioPlayerNode) ScheduleFileAtTimeCompletionCallbackTypeCompletionHandler(file IAVAudioFile, when IAVAudioTime, ...)
- func (a AVAudioPlayerNode) ScheduleFileAtTimeCompletionHandler(file IAVAudioFile, when IAVAudioTime, completionHandler ErrorHandler)
- func (a AVAudioPlayerNode) ScheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler(file IAVAudioFile, startFrame AVAudioFramePosition, ...)
- func (a AVAudioPlayerNode) ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler(file IAVAudioFile, startFrame AVAudioFramePosition, ...)
- func (a AVAudioPlayerNode) SetObstruction(value float32)
- func (a AVAudioPlayerNode) SetOcclusion(value float32)
- func (a AVAudioPlayerNode) SetPan(value float32)
- func (a AVAudioPlayerNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (a AVAudioPlayerNode) SetPosition(value AVAudio3DPoint)
- func (a AVAudioPlayerNode) SetRate(value float32)
- func (a AVAudioPlayerNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (a AVAudioPlayerNode) SetReverbBlend(value float32)
- func (a AVAudioPlayerNode) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (a AVAudioPlayerNode) SetVolume(value float32)
- func (a AVAudioPlayerNode) SourceMode() AVAudio3DMixingSourceMode
- func (a AVAudioPlayerNode) Stop()
- func (a AVAudioPlayerNode) Volume() float32
- type AVAudioPlayerNodeBufferOptions
- type AVAudioPlayerNodeClass
- type AVAudioPlayerNodeCompletionCallbackType
- type AVAudioPlayerNodeCompletionHandler
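The scheduling methods above anchor playback to `AVAudioTime` values. This sketch assumes the node is attached and connected in a running engine, that the `IAVAudioTime` returned by `LastRenderTime` exposes the `SampleTime`/`SampleRate` accessors shown for `AVAudioTime` below, and that `ErrorHandler` is a function type so `nil` is a valid no-op completion; the import path is a placeholder.

```go
package main

import "example.com/avfaudio" // placeholder import path

// startAfterDelay schedules a file on a player node and begins playback
// delaySeconds in the future, anchored to the node's last render time.
func startAfterDelay(node avfaudio.AVAudioPlayerNode, file avfaudio.IAVAudioFile, delaySeconds float64) {
	last := node.LastRenderTime() // inherited from AVAudioNode
	rate := last.SampleRate()
	when := avfaudio.NewAudioTimeWithSampleTimeAtRate(
		last.SampleTime()+avfaudio.AVAudioFramePosition(delaySeconds*rate), rate)
	node.ScheduleFileAtTimeCompletionHandler(file, when, nil) // nil: no completion callback
	node.Play() // Play honors the schedule; nothing sounds until `when`
}
```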
- type AVAudioQuality
- type AVAudioRecorder
- func AVAudioRecorderFromID(id objc.ID) AVAudioRecorder
- func NewAVAudioRecorder() AVAudioRecorder
- func NewAudioRecorderWithURLFormatError(url foundation.INSURL, format IAVAudioFormat) (AVAudioRecorder, error)
- func NewAudioRecorderWithURLSettingsError(url foundation.INSURL, settings foundation.INSDictionary) (AVAudioRecorder, error)
- func (a AVAudioRecorder) Autorelease() AVAudioRecorder
- func (a AVAudioRecorder) AveragePowerForChannel(channelNumber uint) float32
- func (a AVAudioRecorder) CurrentTime() float64
- func (a AVAudioRecorder) Delegate() AVAudioRecorderDelegate
- func (a AVAudioRecorder) DeleteRecording() bool
- func (a AVAudioRecorder) DeviceCurrentTime() float64
- func (a AVAudioRecorder) Format() IAVAudioFormat
- func (a AVAudioRecorder) Init() AVAudioRecorder
- func (a AVAudioRecorder) InitWithURLFormatError(url foundation.INSURL, format IAVAudioFormat) (AVAudioRecorder, error)
- func (a AVAudioRecorder) InitWithURLSettingsError(url foundation.INSURL, settings foundation.INSDictionary) (AVAudioRecorder, error)
- func (a AVAudioRecorder) MeteringEnabled() bool
- func (a AVAudioRecorder) Pause()
- func (a AVAudioRecorder) PeakPowerForChannel(channelNumber uint) float32
- func (a AVAudioRecorder) PlayAndRecord() objc.ID
- func (a AVAudioRecorder) PrepareToRecord() bool
- func (a AVAudioRecorder) Record() objc.ID
- func (a AVAudioRecorder) RecordAtTime(time float64) bool
- func (a AVAudioRecorder) RecordAtTimeForDuration(time float64, duration float64) bool
- func (a AVAudioRecorder) RecordForDuration(duration float64) bool
- func (a AVAudioRecorder) Recording() bool
- func (a AVAudioRecorder) SetDelegate(value AVAudioRecorderDelegate)
- func (a AVAudioRecorder) SetMeteringEnabled(value bool)
- func (a AVAudioRecorder) Settings() foundation.INSDictionary
- func (a AVAudioRecorder) Stop()
- func (a AVAudioRecorder) UpdateMeters()
- func (a AVAudioRecorder) Url() foundation.INSURL
- type AVAudioRecorderClass
- type AVAudioRecorderDelegate
- type AVAudioRecorderDelegateConfig
- type AVAudioRecorderDelegateObject
- func (o AVAudioRecorderDelegateObject) AudioRecorderDidFinishRecordingSuccessfully(recorder IAVAudioRecorder, flag bool)
- func (o AVAudioRecorderDelegateObject) AudioRecorderEncodeErrorDidOccurError(recorder IAVAudioRecorder, error_ foundation.INSError)
- func (o AVAudioRecorderDelegateObject) BaseObject() objectivec.Object
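A recording sketch built from the constructors and transport methods above. The settings dictionary is assumed to carry the usual format keys (format ID, sample rate, channel count); building it is outside the scope of this index, and the import paths are placeholders.

```go
package main

import (
	"errors"

	"example.com/avfaudio"   // placeholder import paths
	"example.com/foundation"
)

// recordTenSeconds records ten seconds of audio to url.
func recordTenSeconds(url foundation.INSURL, settings foundation.INSDictionary) (avfaudio.AVAudioRecorder, error) {
	rec, err := avfaudio.NewAudioRecorderWithURLSettingsError(url, settings)
	if err != nil {
		return rec, err
	}
	if !rec.PrepareToRecord() { // creates the file and primes the hardware
		return rec, errors.New("recorder failed to prepare")
	}
	if !rec.RecordForDuration(10) { // stops on its own after 10 s
		return rec, errors.New("recording could not start")
	}
	return rec, nil
}
```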
- type AVAudioRoutingArbiter
- func (a AVAudioRoutingArbiter) Autorelease() AVAudioRoutingArbiter
- func (a AVAudioRoutingArbiter) BeginArbitrationWithCategory(ctx context.Context, category AVAudioRoutingArbitrationCategory) (bool, error)
- func (a AVAudioRoutingArbiter) BeginArbitrationWithCategoryCompletionHandler(category AVAudioRoutingArbitrationCategory, handler BoolErrorHandler)
- func (a AVAudioRoutingArbiter) Init() AVAudioRoutingArbiter
- func (a AVAudioRoutingArbiter) LeaveArbitration()
- type AVAudioRoutingArbiterClass
- type AVAudioRoutingArbitrationCategory
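A sketch of the begin/leave pairing above, using the `context`-based wrapper. The arbiter is assumed to come from a shared-instance accessor defined elsewhere in the package, the bool result is taken to mirror AVFAudio's `defaultDeviceChanged` flag, and the import path is a placeholder.

```go
package main

import (
	"context"

	"example.com/avfaudio" // placeholder import path
)

// withArbitration wraps a voice call in begin/leave arbitration so AirPods
// can switch to this Mac automatically for the call's duration.
func withArbitration(ctx context.Context, arb avfaudio.AVAudioRoutingArbiter, cat avfaudio.AVAudioRoutingArbitrationCategory, call func()) error {
	deviceChanged, err := arb.BeginArbitrationWithCategory(ctx, cat)
	if err != nil {
		return err
	}
	_ = deviceChanged            // true: the default audio route moved here
	defer arb.LeaveArbitration() // always give the route back when done
	call()
	return nil
}
```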
- type AVAudioSequencer
- func (a AVAudioSequencer) AVMusicTimeStampEndOfTrack() float64
- func (a AVAudioSequencer) Autorelease() AVAudioSequencer
- func (a AVAudioSequencer) BeatsForHostTimeError(inHostTime uint64) (AVMusicTimeStamp, error)
- func (a AVAudioSequencer) BeatsForSeconds(seconds float64) AVMusicTimeStamp
- func (a AVAudioSequencer) CreateAndAppendTrack() IAVMusicTrack
- func (a AVAudioSequencer) CurrentPositionInBeats() float64
- func (a AVAudioSequencer) CurrentPositionInSeconds() float64
- func (a AVAudioSequencer) DataWithSMPTEResolutionError(SMPTEResolution int) (foundation.INSData, error)
- func (a AVAudioSequencer) HostTimeForBeatsError(inBeats AVMusicTimeStamp) (uint64, error)
- func (a AVAudioSequencer) Init() AVAudioSequencer
- func (a AVAudioSequencer) InitWithAudioEngine(engine IAVAudioEngine) AVAudioSequencer
- func (a AVAudioSequencer) LoadFromDataOptionsError(data foundation.INSData, options AVMusicSequenceLoadOptions) (bool, error)
- func (a AVAudioSequencer) LoadFromURLOptionsError(fileURL foundation.INSURL, options AVMusicSequenceLoadOptions) (bool, error)
- func (a AVAudioSequencer) Playing() bool
- func (a AVAudioSequencer) PrepareToPlay()
- func (a AVAudioSequencer) Rate() float32
- func (a AVAudioSequencer) RemoveTrack(track IAVMusicTrack) bool
- func (a AVAudioSequencer) ReverseEvents()
- func (a AVAudioSequencer) SecondsForBeats(beats AVMusicTimeStamp) float64
- func (a AVAudioSequencer) SetAVMusicTimeStampEndOfTrack(value float64)
- func (a AVAudioSequencer) SetCurrentPositionInBeats(value float64)
- func (a AVAudioSequencer) SetCurrentPositionInSeconds(value float64)
- func (a AVAudioSequencer) SetRate(value float32)
- func (a AVAudioSequencer) SetUserCallback(userCallback AVAudioSequencerUserCallback)
- func (a AVAudioSequencer) StartAndReturnError() (bool, error)
- func (a AVAudioSequencer) Stop()
- func (a AVAudioSequencer) TempoTrack() IAVMusicTrack
- func (a AVAudioSequencer) Tracks() []AVMusicTrack
- func (a AVAudioSequencer) UserInfo() foundation.INSDictionary
- func (a AVAudioSequencer) WriteToURLSMPTEResolutionReplaceExistingError(fileURL foundation.INSURL, resolution int, replace bool) (bool, error)
- type AVAudioSequencerClass
- type AVAudioSequencerInfoDictionaryKey
- type AVAudioSequencerUserCallback
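The sequencer methods above cover the typical load-rewind-play cycle. A minimal sketch, assuming the sequencer was created with `InitWithAudioEngine` against a prepared engine; import paths are placeholders.

```go
package main

import (
	"example.com/avfaudio"   // placeholder import paths
	"example.com/foundation"
)

// playMIDIData loads a standard MIDI file (as bytes) and starts playback.
func playMIDIData(seq avfaudio.AVAudioSequencer, data foundation.INSData, opts avfaudio.AVMusicSequenceLoadOptions) error {
	if _, err := seq.LoadFromDataOptionsError(data, opts); err != nil {
		return err
	}
	seq.SetCurrentPositionInBeats(0) // rewind to the top of the sequence
	seq.PrepareToPlay()              // pre-roll so Start doesn't glitch
	_, err := seq.StartAndReturnError()
	return err
}
```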
- type AVAudioSessionActivationOptions
- type AVAudioSessionAnchoringStrategy
- type AVAudioSessionCapability
- func (a AVAudioSessionCapability) Autorelease() AVAudioSessionCapability
- func (a AVAudioSessionCapability) BluetoothMicrophoneExtension() objc.ID
- func (a AVAudioSessionCapability) Enabled() bool
- func (a AVAudioSessionCapability) Init() AVAudioSessionCapability
- func (a AVAudioSessionCapability) SetBluetoothMicrophoneExtension(value objc.ID)
- func (a AVAudioSessionCapability) Supported() bool
- type AVAudioSessionCapabilityClass
- type AVAudioSessionCategoryOptions
- type AVAudioSessionErrorCode
- type AVAudioSessionIOType
- type AVAudioSessionInterruptionOptions
- type AVAudioSessionInterruptionReason
- type AVAudioSessionInterruptionType
- type AVAudioSessionMicrophoneInjectionMode
- type AVAudioSessionPortOverride
- type AVAudioSessionPromptStyle
- type AVAudioSessionRecordPermission
- type AVAudioSessionRenderingMode
- type AVAudioSessionRouteChangeReason
- type AVAudioSessionRouteSelection
- type AVAudioSessionRouteSharingPolicy
- type AVAudioSessionSetActiveOptions
- type AVAudioSessionSilenceSecondaryAudioHintType
- type AVAudioSessionSoundStageSize
- type AVAudioSessionSpatialExperience
- type AVAudioSinkNode
- type AVAudioSinkNodeClass
- type AVAudioSinkNodeReceiverBlock
- type AVAudioSourceNode
- func AVAudioSourceNodeFromID(id objc.ID) AVAudioSourceNode
- func NewAVAudioSourceNode() AVAudioSourceNode
- func NewAudioSourceNodeWithFormatRenderBlock(format IAVAudioFormat, block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
- func NewAudioSourceNodeWithRenderBlock(block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
- func (a AVAudioSourceNode) Autorelease() AVAudioSourceNode
- func (a AVAudioSourceNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (a AVAudioSourceNode) Init() AVAudioSourceNode
- func (a AVAudioSourceNode) InitWithFormatRenderBlock(format IAVAudioFormat, block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
- func (a AVAudioSourceNode) InitWithRenderBlock(block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
- func (a AVAudioSourceNode) Obstruction() float32
- func (a AVAudioSourceNode) Occlusion() float32
- func (a AVAudioSourceNode) Pan() float32
- func (a AVAudioSourceNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (a AVAudioSourceNode) Position() AVAudio3DPoint
- func (a AVAudioSourceNode) Rate() float32
- func (a AVAudioSourceNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (a AVAudioSourceNode) ReverbBlend() float32
- func (a AVAudioSourceNode) SetObstruction(value float32)
- func (a AVAudioSourceNode) SetOcclusion(value float32)
- func (a AVAudioSourceNode) SetPan(value float32)
- func (a AVAudioSourceNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (a AVAudioSourceNode) SetPosition(value AVAudio3DPoint)
- func (a AVAudioSourceNode) SetRate(value float32)
- func (a AVAudioSourceNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (a AVAudioSourceNode) SetReverbBlend(value float32)
- func (a AVAudioSourceNode) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (a AVAudioSourceNode) SetVolume(value float32)
- func (a AVAudioSourceNode) SourceMode() AVAudio3DMixingSourceMode
- func (a AVAudioSourceNode) Volume() float32
- type AVAudioSourceNodeClass
- type AVAudioSourceNodeRenderBlock
- type AVAudioStereoMixing
- type AVAudioStereoMixingObject
- type AVAudioStereoOrientation
- type AVAudioTime
- func AVAudioTimeFromID(id objc.ID) AVAudioTime
- func NewAVAudioTime() AVAudioTime
- func NewAudioTimeWithAudioTimeStampSampleRate(ts objectivec.IObject, sampleRate float64) AVAudioTime
- func NewAudioTimeWithHostTime(hostTime uint64) AVAudioTime
- func NewAudioTimeWithHostTimeSampleTimeAtRate(hostTime uint64, sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
- func NewAudioTimeWithSampleTimeAtRate(sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
- func (a AVAudioTime) AudioTimeStamp() objectivec.IObject
- func (a AVAudioTime) Autorelease() AVAudioTime
- func (a AVAudioTime) ExtrapolateTimeFromAnchor(anchorTime IAVAudioTime) IAVAudioTime
- func (a AVAudioTime) HostTime() uint64
- func (a AVAudioTime) HostTimeValid() bool
- func (a AVAudioTime) Init() AVAudioTime
- func (a AVAudioTime) InitWithAudioTimeStampSampleRate(ts objectivec.IObject, sampleRate float64) AVAudioTime
- func (a AVAudioTime) InitWithHostTime(hostTime uint64) AVAudioTime
- func (a AVAudioTime) InitWithHostTimeSampleTimeAtRate(hostTime uint64, sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
- func (a AVAudioTime) InitWithSampleTimeAtRate(sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
- func (a AVAudioTime) SampleRate() float64
- func (a AVAudioTime) SampleTime() AVAudioFramePosition
- func (a AVAudioTime) SampleTimeValid() bool
- type AVAudioTimeClass
- func (ac AVAudioTimeClass) Alloc() AVAudioTime
- func (ac AVAudioTimeClass) Class() objc.Class
- func (_AVAudioTimeClass AVAudioTimeClass) HostTimeForSeconds(seconds float64) uint64
- func (_AVAudioTimeClass AVAudioTimeClass) SecondsForHostTime(hostTime uint64) float64
- func (_AVAudioTimeClass AVAudioTimeClass) TimeWithAudioTimeStampSampleRate(ts objectivec.IObject, sampleRate float64) AVAudioTime
- func (_AVAudioTimeClass AVAudioTimeClass) TimeWithHostTime(hostTime uint64) AVAudioTime
- func (_AVAudioTimeClass AVAudioTimeClass) TimeWithHostTimeSampleTimeAtRate(hostTime uint64, sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
- func (_AVAudioTimeClass AVAudioTimeClass) TimeWithSampleTimeAtRate(sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
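The class-level helpers above convert between the three clocks an `AVAudioTime` can carry. A small sketch using only the conversions listed; the import path is a placeholder.

```go
package main

import "example.com/avfaudio" // placeholder import path

// timeHelpers demonstrates seconds/host-tick conversion and building a
// timestamp at a sample position.
func timeHelpers(cls avfaudio.AVAudioTimeClass) {
	ticks := cls.HostTimeForSeconds(0.5) // host ticks spanning half a second
	_ = cls.SecondsForHostTime(ticks)    // round-trips back to ~0.5

	// 22050 samples into a 44.1 kHz stream is the same half second.
	t := cls.TimeWithSampleTimeAtRate(22050, 44100)
	_ = t.SampleTimeValid() // true: this time carries a sample position
	_ = t.HostTimeValid()   // false: no host time was supplied
}
```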
- type AVAudioUnit
- func (a AVAudioUnit) AudioComponentDescription() objectivec.IObject
- func (a AVAudioUnit) AudioUnit() IAVAudioUnit
- func (a AVAudioUnit) Autorelease() AVAudioUnit
- func (a AVAudioUnit) Init() AVAudioUnit
- func (a AVAudioUnit) LoadAudioUnitPresetAtURLError(url foundation.INSURL) (bool, error)
- func (a AVAudioUnit) ManufacturerName() string
- func (a AVAudioUnit) Name() string
- func (a AVAudioUnit) Version() uint
- type AVAudioUnitClass
- func (ac AVAudioUnitClass) Alloc() AVAudioUnit
- func (ac AVAudioUnitClass) Class() objc.Class
- func (ac AVAudioUnitClass) InstantiateWithComponentDescriptionOptions(ctx context.Context, audioComponentDescription objectivec.IObject, ...) (*AVAudioUnit, error)
- func (_AVAudioUnitClass AVAudioUnitClass) InstantiateWithComponentDescriptionOptionsCompletionHandler(audioComponentDescription objectivec.IObject, options objectivec.IObject, ...)
- type AVAudioUnitComponent
- func (a AVAudioUnitComponent) AVAudioUnitManufacturerNameApple() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeEffect() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeFormatConverter() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeGenerator() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeMIDIProcessor() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeMixer() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeMusicDevice() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeMusicEffect() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeOfflineEffect() string
- func (a AVAudioUnitComponent) AVAudioUnitTypeOutput() string
- func (a AVAudioUnitComponent) AVAudioUnitTypePanner() string
- func (a AVAudioUnitComponent) AllTagNames() []string
- func (a AVAudioUnitComponent) AudioComponent() objectivec.IObject
- func (a AVAudioUnitComponent) AudioComponentDescription() objectivec.IObject
- func (a AVAudioUnitComponent) Autorelease() AVAudioUnitComponent
- func (a AVAudioUnitComponent) AvailableArchitectures() []foundation.NSNumber
- func (a AVAudioUnitComponent) ConfigurationDictionary() foundation.INSDictionary
- func (a AVAudioUnitComponent) HasCustomView() bool
- func (a AVAudioUnitComponent) HasMIDIInput() bool
- func (a AVAudioUnitComponent) HasMIDIOutput() bool
- func (a AVAudioUnitComponent) Icon() objc.ID
- func (a AVAudioUnitComponent) IconURL() foundation.INSURL
- func (a AVAudioUnitComponent) Init() AVAudioUnitComponent
- func (a AVAudioUnitComponent) LocalizedTypeName() string
- func (a AVAudioUnitComponent) ManufacturerName() string
- func (a AVAudioUnitComponent) Name() string
- func (a AVAudioUnitComponent) PassesAUVal() bool
- func (a AVAudioUnitComponent) SandboxSafe() bool
- func (a AVAudioUnitComponent) SetUserTagNames(value []string)
- func (a AVAudioUnitComponent) SupportsNumberInputChannelsOutputChannels(numInputChannels int, numOutputChannels int) bool
- func (a AVAudioUnitComponent) TypeName() string
- func (a AVAudioUnitComponent) UserTagNames() []string
- func (a AVAudioUnitComponent) Version() uint
- func (a AVAudioUnitComponent) VersionString() string
- type AVAudioUnitComponentClass
- type AVAudioUnitComponentHandler
- type AVAudioUnitComponentManager
- func (a AVAudioUnitComponentManager) Autorelease() AVAudioUnitComponentManager
- func (a AVAudioUnitComponentManager) ComponentsMatchingDescription(desc objectivec.IObject) []AVAudioUnitComponent
- func (a AVAudioUnitComponentManager) ComponentsMatchingPredicate(predicate foundation.INSPredicate) []AVAudioUnitComponent
- func (a AVAudioUnitComponentManager) ComponentsPassingTest(testHandler AVAudioUnitComponentHandler) []AVAudioUnitComponent
- func (a AVAudioUnitComponentManager) Init() AVAudioUnitComponentManager
- func (a AVAudioUnitComponentManager) StandardLocalizedTagNames() []string
- func (a AVAudioUnitComponentManager) TagNames() []string
- type AVAudioUnitComponentManagerClass
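A sketch of querying installed audio units with the manager above. The test handler's Go signature is an assumption, mirroring the component/stop pair of the underlying Objective-C block; the manager instance is assumed to come from a shared accessor elsewhere, and the import path is a placeholder.

```go
package main

import "example.com/avfaudio" // placeholder import path

// validatedComponentNames lists installed components that pass auval.
func validatedComponentNames(mgr avfaudio.AVAudioUnitComponentManager) []string {
	comps := mgr.ComponentsPassingTest(func(c avfaudio.AVAudioUnitComponent, stop *bool) bool {
		return c.PassesAUVal() // keep only components that pass validation
	})
	names := make([]string, 0, len(comps))
	for _, c := range comps {
		names = append(names, c.ManufacturerName()+" "+c.Name()+" v"+c.VersionString())
	}
	return names
}
```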
- type AVAudioUnitDelay
- func (a AVAudioUnitDelay) Autorelease() AVAudioUnitDelay
- func (a AVAudioUnitDelay) DelayTime() float64
- func (a AVAudioUnitDelay) Feedback() float32
- func (a AVAudioUnitDelay) Init() AVAudioUnitDelay
- func (a AVAudioUnitDelay) LowPassCutoff() float32
- func (a AVAudioUnitDelay) SetDelayTime(value float64)
- func (a AVAudioUnitDelay) SetFeedback(value float32)
- func (a AVAudioUnitDelay) SetLowPassCutoff(value float32)
- func (a AVAudioUnitDelay) SetWetDryMix(value float32)
- func (a AVAudioUnitDelay) WetDryMix() float32
- type AVAudioUnitDelayClass
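The delay unit's four parameters above are enough for a classic slapback echo. A minimal sketch; attaching and connecting the unit in an engine is assumed to happen elsewhere, and the import path is a placeholder. `Feedback` and `WetDryMix` are percentages, `DelayTime` is in seconds.

```go
package main

import "example.com/avfaudio" // placeholder import path

// slapback configures a delay unit for a single short echo.
func slapback(delay avfaudio.AVAudioUnitDelay) {
	delay.SetDelayTime(0.12)     // ~120 ms between the dry signal and the echo
	delay.SetFeedback(15)        // low feedback keeps it to one audible repeat
	delay.SetLowPassCutoff(8000) // Hz: darken the repeat slightly
	delay.SetWetDryMix(35)       // 35% wet, 65% dry
}
```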
- type AVAudioUnitDistortion
- func (a AVAudioUnitDistortion) Autorelease() AVAudioUnitDistortion
- func (a AVAudioUnitDistortion) Init() AVAudioUnitDistortion
- func (a AVAudioUnitDistortion) LoadFactoryPreset(preset AVAudioUnitDistortionPreset)
- func (a AVAudioUnitDistortion) PreGain() float32
- func (a AVAudioUnitDistortion) SetPreGain(value float32)
- func (a AVAudioUnitDistortion) SetWetDryMix(value float32)
- func (a AVAudioUnitDistortion) WetDryMix() float32
- type AVAudioUnitDistortionClass
- type AVAudioUnitDistortionPreset
- type AVAudioUnitEQ
- func (a AVAudioUnitEQ) Autorelease() AVAudioUnitEQ
- func (a AVAudioUnitEQ) Bands() []AVAudioUnitEQFilterParameters
- func (a AVAudioUnitEQ) GlobalGain() float32
- func (a AVAudioUnitEQ) Init() AVAudioUnitEQ
- func (a AVAudioUnitEQ) InitWithNumberOfBands(numberOfBands uint) AVAudioUnitEQ
- func (a AVAudioUnitEQ) SetGlobalGain(value float32)
- type AVAudioUnitEQClass
- type AVAudioUnitEQFilterParameters
- func (a AVAudioUnitEQFilterParameters) Autorelease() AVAudioUnitEQFilterParameters
- func (a AVAudioUnitEQFilterParameters) Bands() IAVAudioUnitEQFilterParameters
- func (a AVAudioUnitEQFilterParameters) Bandwidth() float32
- func (a AVAudioUnitEQFilterParameters) Bypass() bool
- func (a AVAudioUnitEQFilterParameters) FilterType() AVAudioUnitEQFilterType
- func (a AVAudioUnitEQFilterParameters) Frequency() float32
- func (a AVAudioUnitEQFilterParameters) Gain() float32
- func (a AVAudioUnitEQFilterParameters) GlobalGain() float32
- func (a AVAudioUnitEQFilterParameters) Init() AVAudioUnitEQFilterParameters
- func (a AVAudioUnitEQFilterParameters) SetBands(value IAVAudioUnitEQFilterParameters)
- func (a AVAudioUnitEQFilterParameters) SetBandwidth(value float32)
- func (a AVAudioUnitEQFilterParameters) SetBypass(value bool)
- func (a AVAudioUnitEQFilterParameters) SetFilterType(value AVAudioUnitEQFilterType)
- func (a AVAudioUnitEQFilterParameters) SetFrequency(value float32)
- func (a AVAudioUnitEQFilterParameters) SetGain(value float32)
- func (a AVAudioUnitEQFilterParameters) SetGlobalGain(value float32)
- type AVAudioUnitEQFilterParametersClass
- type AVAudioUnitEQFilterType
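A sketch of configuring one band of the EQ above. The filter-type constant name is an assumption patterned on the AVFAudio enum; check the `AVAudioUnitEQFilterType` values these bindings actually export. The import path is a placeholder.

```go
package main

import "example.com/avfaudio" // placeholder import path

// lowShelfCut sets the first EQ band to a -6 dB low shelf at 120 Hz.
func lowShelfCut(eq avfaudio.AVAudioUnitEQ) {
	bands := eq.Bands() // band count was fixed by InitWithNumberOfBands
	if len(bands) == 0 {
		return
	}
	b := bands[0]
	b.SetFilterType(avfaudio.AVAudioUnitEQFilterTypeLowShelf) // assumed constant name
	b.SetFrequency(120) // Hz
	b.SetGain(-6)       // dB
	b.SetBypass(false)  // make sure the band is active
	eq.SetGlobalGain(0) // dB of overall trim applied after the bands
}
```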
- type AVAudioUnitEffect
- func (a AVAudioUnitEffect) Autorelease() AVAudioUnitEffect
- func (a AVAudioUnitEffect) Bypass() bool
- func (a AVAudioUnitEffect) Init() AVAudioUnitEffect
- func (a AVAudioUnitEffect) InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitEffect
- func (a AVAudioUnitEffect) SetBypass(value bool)
- type AVAudioUnitEffectClass
- type AVAudioUnitErrorHandler
- type AVAudioUnitGenerator
- func (a AVAudioUnitGenerator) Autorelease() AVAudioUnitGenerator
- func (a AVAudioUnitGenerator) Bypass() bool
- func (a AVAudioUnitGenerator) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (a AVAudioUnitGenerator) Init() AVAudioUnitGenerator
- func (a AVAudioUnitGenerator) InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitGenerator
- func (a AVAudioUnitGenerator) Obstruction() float32
- func (a AVAudioUnitGenerator) Occlusion() float32
- func (a AVAudioUnitGenerator) Pan() float32
- func (a AVAudioUnitGenerator) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (a AVAudioUnitGenerator) Position() AVAudio3DPoint
- func (a AVAudioUnitGenerator) Rate() float32
- func (a AVAudioUnitGenerator) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (a AVAudioUnitGenerator) ReverbBlend() float32
- func (a AVAudioUnitGenerator) SetBypass(value bool)
- func (a AVAudioUnitGenerator) SetObstruction(value float32)
- func (a AVAudioUnitGenerator) SetOcclusion(value float32)
- func (a AVAudioUnitGenerator) SetPan(value float32)
- func (a AVAudioUnitGenerator) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (a AVAudioUnitGenerator) SetPosition(value AVAudio3DPoint)
- func (a AVAudioUnitGenerator) SetRate(value float32)
- func (a AVAudioUnitGenerator) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (a AVAudioUnitGenerator) SetReverbBlend(value float32)
- func (a AVAudioUnitGenerator) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (a AVAudioUnitGenerator) SetVolume(value float32)
- func (a AVAudioUnitGenerator) SourceMode() AVAudio3DMixingSourceMode
- func (a AVAudioUnitGenerator) Volume() float32
- type AVAudioUnitGeneratorClass
- type AVAudioUnitMIDIInstrument
- func (a AVAudioUnitMIDIInstrument) Autorelease() AVAudioUnitMIDIInstrument
- func (a AVAudioUnitMIDIInstrument) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
- func (a AVAudioUnitMIDIInstrument) Init() AVAudioUnitMIDIInstrument
- func (a AVAudioUnitMIDIInstrument) InitWithAudioComponentDescription(description objectivec.IObject) AVAudioUnitMIDIInstrument
- func (a AVAudioUnitMIDIInstrument) Obstruction() float32
- func (a AVAudioUnitMIDIInstrument) Occlusion() float32
- func (a AVAudioUnitMIDIInstrument) Pan() float32
- func (a AVAudioUnitMIDIInstrument) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
- func (a AVAudioUnitMIDIInstrument) Position() AVAudio3DPoint
- func (a AVAudioUnitMIDIInstrument) Rate() float32
- func (a AVAudioUnitMIDIInstrument) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
- func (a AVAudioUnitMIDIInstrument) ReverbBlend() float32
- func (a AVAudioUnitMIDIInstrument) SendControllerWithValueOnChannel(controller uint8, value uint8, channel uint8)
- func (a AVAudioUnitMIDIInstrument) SendMIDIEventData1(midiStatus uint8, data1 uint8)
- func (a AVAudioUnitMIDIInstrument) SendMIDIEventData1Data2(midiStatus uint8, data1 uint8, data2 uint8)
- func (a AVAudioUnitMIDIInstrument) SendMIDIEventList(eventList objectivec.IObject)
- func (a AVAudioUnitMIDIInstrument) SendMIDISysExEvent(midiData foundation.INSData)
- func (a AVAudioUnitMIDIInstrument) SendPitchBendOnChannel(pitchbend uint16, channel uint8)
- func (a AVAudioUnitMIDIInstrument) SendPressureForKeyWithValueOnChannel(key uint8, value uint8, channel uint8)
- func (a AVAudioUnitMIDIInstrument) SendPressureOnChannel(pressure uint8, channel uint8)
- func (a AVAudioUnitMIDIInstrument) SendProgramChangeBankMSBBankLSBOnChannel(program uint8, bankMSB uint8, bankLSB uint8, channel uint8)
- func (a AVAudioUnitMIDIInstrument) SendProgramChangeOnChannel(program uint8, channel uint8)
- func (a AVAudioUnitMIDIInstrument) SetObstruction(value float32)
- func (a AVAudioUnitMIDIInstrument) SetOcclusion(value float32)
- func (a AVAudioUnitMIDIInstrument) SetPan(value float32)
- func (a AVAudioUnitMIDIInstrument) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
- func (a AVAudioUnitMIDIInstrument) SetPosition(value AVAudio3DPoint)
- func (a AVAudioUnitMIDIInstrument) SetRate(value float32)
- func (a AVAudioUnitMIDIInstrument) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
- func (a AVAudioUnitMIDIInstrument) SetReverbBlend(value float32)
- func (a AVAudioUnitMIDIInstrument) SetSourceMode(value AVAudio3DMixingSourceMode)
- func (a AVAudioUnitMIDIInstrument) SetVolume(value float32)
- func (a AVAudioUnitMIDIInstrument) SourceMode() AVAudio3DMixingSourceMode
- func (a AVAudioUnitMIDIInstrument) StartNoteWithVelocityOnChannel(note uint8, velocity uint8, channel uint8)
- func (a AVAudioUnitMIDIInstrument) StopNoteOnChannel(note uint8, channel uint8)
- func (a AVAudioUnitMIDIInstrument) Volume() float32
- type AVAudioUnitMIDIInstrumentClass
- type AVAudioUnitReverb
- type AVAudioUnitReverbClass
- type AVAudioUnitReverbPreset
- type AVAudioUnitSampler
- func (a AVAudioUnitSampler) Autorelease() AVAudioUnitSampler
- func (a AVAudioUnitSampler) GlobalTuning() float32
- func (a AVAudioUnitSampler) Init() AVAudioUnitSampler
- func (a AVAudioUnitSampler) LoadAudioFilesAtURLsError(audioFiles []foundation.NSURL) (bool, error)
- func (a AVAudioUnitSampler) LoadInstrumentAtURLError(instrumentURL foundation.INSURL) (bool, error)
- func (a AVAudioUnitSampler) LoadSoundBankInstrumentAtURLProgramBankMSBBankLSBError(bankURL foundation.INSURL, program uint8, bankMSB uint8, bankLSB uint8) (bool, error)
- func (a AVAudioUnitSampler) OverallGain() float32
- func (a AVAudioUnitSampler) SetGlobalTuning(value float32)
- func (a AVAudioUnitSampler) SetOverallGain(value float32)
- func (a AVAudioUnitSampler) SetStereoPan(value float32)
- func (a AVAudioUnitSampler) StereoPan() float32
- type AVAudioUnitSamplerClass
- type AVAudioUnitTimeEffect
- func (a AVAudioUnitTimeEffect) Autorelease() AVAudioUnitTimeEffect
- func (a AVAudioUnitTimeEffect) Bypass() bool
- func (a AVAudioUnitTimeEffect) Init() AVAudioUnitTimeEffect
- func (a AVAudioUnitTimeEffect) InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitTimeEffect
- func (a AVAudioUnitTimeEffect) SetBypass(value bool)
- type AVAudioUnitTimeEffectClass
- type AVAudioUnitTimePitch
- func (a AVAudioUnitTimePitch) Autorelease() AVAudioUnitTimePitch
- func (a AVAudioUnitTimePitch) Init() AVAudioUnitTimePitch
- func (a AVAudioUnitTimePitch) Overlap() float32
- func (a AVAudioUnitTimePitch) Pitch() float32
- func (a AVAudioUnitTimePitch) Rate() float32
- func (a AVAudioUnitTimePitch) SetOverlap(value float32)
- func (a AVAudioUnitTimePitch) SetPitch(value float32)
- func (a AVAudioUnitTimePitch) SetRate(value float32)
- type AVAudioUnitTimePitchClass
- type AVAudioUnitVarispeed
- type AVAudioUnitVarispeedClass
- type AVAudioVoiceProcessingOtherAudioDuckingConfiguration
- type AVAudioVoiceProcessingOtherAudioDuckingLevel
- type AVAudioVoiceProcessingSpeechActivityEvent
- type AVAudioVoiceProcessingSpeechActivityEventHandler
- type AVBeatRange
- type AVExtendedNoteOnEvent
- func AVExtendedNoteOnEventFromID(id objc.ID) AVExtendedNoteOnEvent
- func NewAVExtendedNoteOnEvent() AVExtendedNoteOnEvent
- func NewExtendedNoteOnEventWithMIDINoteVelocityGroupIDDuration(midiNote float32, velocity float32, groupID uint32, duration AVMusicTimeStamp) AVExtendedNoteOnEvent
- func NewExtendedNoteOnEventWithMIDINoteVelocityInstrumentIDGroupIDDuration(midiNote float32, velocity float32, instrumentID uint32, groupID uint32, ...) AVExtendedNoteOnEvent
- func (e AVExtendedNoteOnEvent) Autorelease() AVExtendedNoteOnEvent
- func (e AVExtendedNoteOnEvent) Duration() AVMusicTimeStamp
- func (e AVExtendedNoteOnEvent) GroupID() uint32
- func (e AVExtendedNoteOnEvent) Init() AVExtendedNoteOnEvent
- func (e AVExtendedNoteOnEvent) InitWithMIDINoteVelocityGroupIDDuration(midiNote float32, velocity float32, groupID uint32, duration AVMusicTimeStamp) AVExtendedNoteOnEvent
- func (e AVExtendedNoteOnEvent) InitWithMIDINoteVelocityInstrumentIDGroupIDDuration(midiNote float32, velocity float32, instrumentID uint32, groupID uint32, ...) AVExtendedNoteOnEvent
- func (e AVExtendedNoteOnEvent) InstrumentID() uint32
- func (e AVExtendedNoteOnEvent) MidiNote() float32
- func (e AVExtendedNoteOnEvent) SetDuration(value AVMusicTimeStamp)
- func (e AVExtendedNoteOnEvent) SetGroupID(value uint32)
- func (e AVExtendedNoteOnEvent) SetInstrumentID(value uint32)
- func (e AVExtendedNoteOnEvent) SetMidiNote(value float32)
- func (e AVExtendedNoteOnEvent) SetVelocity(value float32)
- func (e AVExtendedNoteOnEvent) Velocity() float32
- type AVExtendedNoteOnEventClass
- type AVExtendedTempoEvent
- func (e AVExtendedTempoEvent) Autorelease() AVExtendedTempoEvent
- func (e AVExtendedTempoEvent) Init() AVExtendedTempoEvent
- func (e AVExtendedTempoEvent) InitWithTempo(tempo float64) AVExtendedTempoEvent
- func (e AVExtendedTempoEvent) SetTempo(value float64)
- func (e AVExtendedTempoEvent) Tempo() float64
- type AVExtendedTempoEventClass
- type AVMIDIChannelEvent
- type AVMIDIChannelEventClass
- type AVMIDIChannelPressureEvent
- func (m AVMIDIChannelPressureEvent) Autorelease() AVMIDIChannelPressureEvent
- func (m AVMIDIChannelPressureEvent) Init() AVMIDIChannelPressureEvent
- func (m AVMIDIChannelPressureEvent) InitWithChannelPressure(channel uint32, pressure uint32) AVMIDIChannelPressureEvent
- func (m AVMIDIChannelPressureEvent) Pressure() uint32
- func (m AVMIDIChannelPressureEvent) SetPressure(value uint32)
- type AVMIDIChannelPressureEventClass
- type AVMIDIControlChangeEvent
- func (m AVMIDIControlChangeEvent) Autorelease() AVMIDIControlChangeEvent
- func (m AVMIDIControlChangeEvent) Init() AVMIDIControlChangeEvent
- func (m AVMIDIControlChangeEvent) InitWithChannelMessageTypeValue(channel uint32, messageType AVMIDIControlChangeMessageType, value uint32) AVMIDIControlChangeEvent
- func (m AVMIDIControlChangeEvent) MessageType() AVMIDIControlChangeMessageType
- func (m AVMIDIControlChangeEvent) Value() uint32
- type AVMIDIControlChangeEventClass
- type AVMIDIControlChangeMessageType
- type AVMIDIMetaEvent
- type AVMIDIMetaEventClass
- type AVMIDIMetaEventType
- type AVMIDINoteEvent
- func (m AVMIDINoteEvent) Autorelease() AVMIDINoteEvent
- func (m AVMIDINoteEvent) Channel() uint32
- func (m AVMIDINoteEvent) Duration() AVMusicTimeStamp
- func (m AVMIDINoteEvent) Init() AVMIDINoteEvent
- func (m AVMIDINoteEvent) InitWithChannelKeyVelocityDuration(channel uint32, keyNum uint32, velocity uint32, duration AVMusicTimeStamp) AVMIDINoteEvent
- func (m AVMIDINoteEvent) Key() uint32
- func (m AVMIDINoteEvent) SetChannel(value uint32)
- func (m AVMIDINoteEvent) SetDuration(value AVMusicTimeStamp)
- func (m AVMIDINoteEvent) SetKey(value uint32)
- func (m AVMIDINoteEvent) SetVelocity(value uint32)
- func (m AVMIDINoteEvent) Velocity() uint32
- type AVMIDINoteEventClass
- type AVMIDIPitchBendEvent
- func (m AVMIDIPitchBendEvent) Autorelease() AVMIDIPitchBendEvent
- func (m AVMIDIPitchBendEvent) Init() AVMIDIPitchBendEvent
- func (m AVMIDIPitchBendEvent) InitWithChannelValue(channel uint32, value uint32) AVMIDIPitchBendEvent
- func (m AVMIDIPitchBendEvent) SetValue(value uint32)
- func (m AVMIDIPitchBendEvent) Value() uint32
- type AVMIDIPitchBendEventClass
- type AVMIDIPlayer
- func AVMIDIPlayerFromID(id objc.ID) AVMIDIPlayer
- func NewAVMIDIPlayer() AVMIDIPlayer
- func NewMIDIPlayerWithContentsOfURLSoundBankURLError(inURL foundation.INSURL, bankURL foundation.INSURL) (AVMIDIPlayer, error)
- func NewMIDIPlayerWithDataSoundBankURLError(data foundation.INSData, bankURL foundation.INSURL) (AVMIDIPlayer, error)
- func (m AVMIDIPlayer) Autorelease() AVMIDIPlayer
- func (m AVMIDIPlayer) CurrentPosition() float64
- func (m AVMIDIPlayer) Duration() float64
- func (m AVMIDIPlayer) Init() AVMIDIPlayer
- func (m AVMIDIPlayer) InitWithContentsOfURLSoundBankURLError(inURL foundation.INSURL, bankURL foundation.INSURL) (AVMIDIPlayer, error)
- func (m AVMIDIPlayer) InitWithDataSoundBankURLError(data foundation.INSData, bankURL foundation.INSURL) (AVMIDIPlayer, error)
- func (m AVMIDIPlayer) Play(completionHandler ErrorHandler)
- func (m AVMIDIPlayer) Playing() bool
- func (m AVMIDIPlayer) PrepareToPlay()
- func (m AVMIDIPlayer) Rate() float32
- func (m AVMIDIPlayer) SetCurrentPosition(value float64)
- func (m AVMIDIPlayer) SetRate(value float32)
- func (m AVMIDIPlayer) Stop()
- type AVMIDIPlayerClass
- type AVMIDIPlayerCompletionHandler
- type AVMIDIPolyPressureEvent
- func (m AVMIDIPolyPressureEvent) Autorelease() AVMIDIPolyPressureEvent
- func (m AVMIDIPolyPressureEvent) Init() AVMIDIPolyPressureEvent
- func (m AVMIDIPolyPressureEvent) InitWithChannelKeyPressure(channel uint32, key uint32, pressure uint32) AVMIDIPolyPressureEvent
- func (m AVMIDIPolyPressureEvent) Key() uint32
- func (m AVMIDIPolyPressureEvent) Pressure() uint32
- func (m AVMIDIPolyPressureEvent) SetKey(value uint32)
- func (m AVMIDIPolyPressureEvent) SetPressure(value uint32)
- type AVMIDIPolyPressureEventClass
- type AVMIDIProgramChangeEvent
- func (m AVMIDIProgramChangeEvent) Autorelease() AVMIDIProgramChangeEvent
- func (m AVMIDIProgramChangeEvent) Init() AVMIDIProgramChangeEvent
- func (m AVMIDIProgramChangeEvent) InitWithChannelProgramNumber(channel uint32, programNumber uint32) AVMIDIProgramChangeEvent
- func (m AVMIDIProgramChangeEvent) ProgramNumber() uint32
- func (m AVMIDIProgramChangeEvent) SetProgramNumber(value uint32)
- type AVMIDIProgramChangeEventClass
- type AVMIDISysexEvent
- type AVMIDISysexEventClass
- type AVMusicEvent
- type AVMusicEventClass
- type AVMusicEventEnumerationBlock
- type AVMusicSequenceLoadOptions
- type AVMusicTimeStamp
- type AVMusicTrack
- func (m AVMusicTrack) AVMusicTimeStampEndOfTrack() float64
- func (m AVMusicTrack) AddEventAtBeat(event IAVMusicEvent, beat AVMusicTimeStamp)
- func (m AVMusicTrack) Autorelease() AVMusicTrack
- func (m AVMusicTrack) ClearEventsInRange(range_ AVBeatRange)
- func (m AVMusicTrack) CopyAndMergeEventsInRangeFromTrackMergeAtBeat(range_ AVBeatRange, sourceTrack IAVMusicTrack, mergeStartBeat AVMusicTimeStamp)
- func (m AVMusicTrack) CopyEventsInRangeFromTrackInsertAtBeat(range_ AVBeatRange, sourceTrack IAVMusicTrack, ...)
- func (m AVMusicTrack) CutEventsInRange(range_ AVBeatRange)
- func (m AVMusicTrack) DestinationAudioUnit() IAVAudioUnit
- func (m AVMusicTrack) DestinationMIDIEndpoint() objectivec.IObject
- func (m AVMusicTrack) EnumerateEventsInRangeUsingBlock(range_ AVBeatRange, block AVMusicEventEnumerationBlock)
- func (m AVMusicTrack) Init() AVMusicTrack
- func (m AVMusicTrack) LengthInBeats() AVMusicTimeStamp
- func (m AVMusicTrack) LengthInSeconds() float64
- func (m AVMusicTrack) LoopRange() AVBeatRange
- func (m AVMusicTrack) LoopingEnabled() bool
- func (m AVMusicTrack) MoveEventsInRangeByAmount(range_ AVBeatRange, beatAmount AVMusicTimeStamp)
- func (m AVMusicTrack) Muted() bool
- func (m AVMusicTrack) NumberOfLoops() int
- func (m AVMusicTrack) OffsetTime() AVMusicTimeStamp
- func (m AVMusicTrack) SetAVMusicTimeStampEndOfTrack(value float64)
- func (m AVMusicTrack) SetDestinationAudioUnit(value IAVAudioUnit)
- func (m AVMusicTrack) SetDestinationMIDIEndpoint(value objectivec.IObject)
- func (m AVMusicTrack) SetLengthInBeats(value AVMusicTimeStamp)
- func (m AVMusicTrack) SetLengthInSeconds(value float64)
- func (m AVMusicTrack) SetLoopRange(value AVBeatRange)
- func (m AVMusicTrack) SetLoopingEnabled(value bool)
- func (m AVMusicTrack) SetMuted(value bool)
- func (m AVMusicTrack) SetNumberOfLoops(value int)
- func (m AVMusicTrack) SetOffsetTime(value AVMusicTimeStamp)
- func (m AVMusicTrack) SetSoloed(value bool)
- func (m AVMusicTrack) SetUsesAutomatedParameters(value bool)
- func (m AVMusicTrack) Soloed() bool
- func (m AVMusicTrack) TimeResolution() uint
- func (m AVMusicTrack) UsesAutomatedParameters() bool
- type AVMusicTrackClass
- type AVMusicTrackLoopCount
- type AVMusicUserEvent
- type AVMusicUserEventClass
- type AVParameterEvent
- func (p AVParameterEvent) Autorelease() AVParameterEvent
- func (p AVParameterEvent) Element() uint32
- func (p AVParameterEvent) Init() AVParameterEvent
- func (p AVParameterEvent) InitWithParameterIDScopeElementValue(parameterID uint32, scope uint32, element uint32, value float32) AVParameterEvent
- func (p AVParameterEvent) ParameterID() uint32
- func (p AVParameterEvent) Scope() uint32
- func (p AVParameterEvent) SetElement(value uint32)
- func (p AVParameterEvent) SetParameterID(value uint32)
- func (p AVParameterEvent) SetScope(value uint32)
- func (p AVParameterEvent) SetValue(value float32)
- func (p AVParameterEvent) Value() float32
- type AVParameterEventClass
- type AVSpeechBoundary
- type AVSpeechSynthesisMarker
- func AVSpeechSynthesisMarkerFromID(id objc.ID) AVSpeechSynthesisMarker
- func NewAVSpeechSynthesisMarker() AVSpeechSynthesisMarker
- func NewSpeechSynthesisMarkerWithBookmarkNameAtByteSampleOffset(mark string, byteSampleOffset int) AVSpeechSynthesisMarker
- func NewSpeechSynthesisMarkerWithMarkerTypeForTextRangeAtByteSampleOffset(type_ AVSpeechSynthesisMarkerMark, range_ foundation.NSRange, ...) AVSpeechSynthesisMarker
- func NewSpeechSynthesisMarkerWithParagraphRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
- func NewSpeechSynthesisMarkerWithPhonemeStringAtByteSampleOffset(phoneme string, byteSampleOffset int) AVSpeechSynthesisMarker
- func NewSpeechSynthesisMarkerWithSentenceRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
- func NewSpeechSynthesisMarkerWithWordRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) Autorelease() AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) BookmarkName() string
- func (s AVSpeechSynthesisMarker) ByteSampleOffset() uint
- func (s AVSpeechSynthesisMarker) EncodeWithCoder(coder foundation.INSCoder)
- func (s AVSpeechSynthesisMarker) Init() AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) InitWithBookmarkNameAtByteSampleOffset(mark string, byteSampleOffset int) AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) InitWithMarkerTypeForTextRangeAtByteSampleOffset(type_ AVSpeechSynthesisMarkerMark, range_ foundation.NSRange, ...) AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) InitWithParagraphRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) InitWithPhonemeStringAtByteSampleOffset(phoneme string, byteSampleOffset int) AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) InitWithSentenceRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) InitWithWordRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
- func (s AVSpeechSynthesisMarker) Mark() AVSpeechSynthesisMarkerMark
- func (s AVSpeechSynthesisMarker) Phoneme() string
- func (s AVSpeechSynthesisMarker) SetBookmarkName(value string)
- func (s AVSpeechSynthesisMarker) SetByteSampleOffset(value uint)
- func (s AVSpeechSynthesisMarker) SetMark(value AVSpeechSynthesisMarkerMark)
- func (s AVSpeechSynthesisMarker) SetPhoneme(value string)
- func (s AVSpeechSynthesisMarker) SetSpeechSynthesisOutputMetadataBlock(value AVSpeechSynthesisProviderOutputBlock)
- func (s AVSpeechSynthesisMarker) SetTextRange(value foundation.NSRange)
- func (s AVSpeechSynthesisMarker) SpeechSynthesisOutputMetadataBlock() AVSpeechSynthesisProviderOutputBlock
- func (s AVSpeechSynthesisMarker) TextRange() foundation.NSRange
- type AVSpeechSynthesisMarkerClass
- type AVSpeechSynthesisMarkerMark
- type AVSpeechSynthesisPersonalVoiceAuthorizationStatus
- type AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler
- type AVSpeechSynthesisProviderAudioUnit
- func (s AVSpeechSynthesisProviderAudioUnit) Autorelease() AVSpeechSynthesisProviderAudioUnit
- func (s AVSpeechSynthesisProviderAudioUnit) CancelSpeechRequest()
- func (s AVSpeechSynthesisProviderAudioUnit) Init() AVSpeechSynthesisProviderAudioUnit
- func (s AVSpeechSynthesisProviderAudioUnit) SetSpeechSynthesisOutputMetadataBlock(value AVSpeechSynthesisProviderOutputBlock)
- func (s AVSpeechSynthesisProviderAudioUnit) SetSpeechVoices(value []AVSpeechSynthesisProviderVoice)
- func (s AVSpeechSynthesisProviderAudioUnit) SpeechSynthesisOutputMetadataBlock() AVSpeechSynthesisProviderOutputBlock
- func (s AVSpeechSynthesisProviderAudioUnit) SpeechVoices() []AVSpeechSynthesisProviderVoice
- func (s AVSpeechSynthesisProviderAudioUnit) SynthesizeSpeechRequest(speechRequest IAVSpeechSynthesisProviderRequest)
- type AVSpeechSynthesisProviderAudioUnitClass
- type AVSpeechSynthesisProviderOutputBlock
- type AVSpeechSynthesisProviderRequest
- func AVSpeechSynthesisProviderRequestFromID(id objc.ID) AVSpeechSynthesisProviderRequest
- func NewAVSpeechSynthesisProviderRequest() AVSpeechSynthesisProviderRequest
- func NewSpeechSynthesisProviderRequestWithSSMLRepresentationVoice(text string, voice IAVSpeechSynthesisProviderVoice) AVSpeechSynthesisProviderRequest
- func (s AVSpeechSynthesisProviderRequest) Autorelease() AVSpeechSynthesisProviderRequest
- func (s AVSpeechSynthesisProviderRequest) EncodeWithCoder(coder foundation.INSCoder)
- func (s AVSpeechSynthesisProviderRequest) Init() AVSpeechSynthesisProviderRequest
- func (s AVSpeechSynthesisProviderRequest) InitWithSSMLRepresentationVoice(text string, voice IAVSpeechSynthesisProviderVoice) AVSpeechSynthesisProviderRequest
- func (s AVSpeechSynthesisProviderRequest) SsmlRepresentation() string
- func (s AVSpeechSynthesisProviderRequest) Voice() IAVSpeechSynthesisProviderVoice
- type AVSpeechSynthesisProviderRequestClass
- type AVSpeechSynthesisProviderVoice
- func AVSpeechSynthesisProviderVoiceFromID(id objc.ID) AVSpeechSynthesisProviderVoice
- func NewAVSpeechSynthesisProviderVoice() AVSpeechSynthesisProviderVoice
- func NewSpeechSynthesisProviderVoiceWithNameIdentifierPrimaryLanguagesSupportedLanguages(name string, identifier string, primaryLanguages []string, ...) AVSpeechSynthesisProviderVoice
- func (s AVSpeechSynthesisProviderVoice) Age() int
- func (s AVSpeechSynthesisProviderVoice) Autorelease() AVSpeechSynthesisProviderVoice
- func (s AVSpeechSynthesisProviderVoice) EncodeWithCoder(coder foundation.INSCoder)
- func (s AVSpeechSynthesisProviderVoice) Gender() AVSpeechSynthesisVoiceGender
- func (s AVSpeechSynthesisProviderVoice) Identifier() string
- func (s AVSpeechSynthesisProviderVoice) Init() AVSpeechSynthesisProviderVoice
- func (s AVSpeechSynthesisProviderVoice) InitWithNameIdentifierPrimaryLanguagesSupportedLanguages(name string, identifier string, primaryLanguages []string, ...) AVSpeechSynthesisProviderVoice
- func (s AVSpeechSynthesisProviderVoice) Name() string
- func (s AVSpeechSynthesisProviderVoice) PrimaryLanguages() []string
- func (s AVSpeechSynthesisProviderVoice) SetAge(value int)
- func (s AVSpeechSynthesisProviderVoice) SetGender(value AVSpeechSynthesisVoiceGender)
- func (s AVSpeechSynthesisProviderVoice) SetSpeechVoices(value IAVSpeechSynthesisProviderVoice)
- func (s AVSpeechSynthesisProviderVoice) SetVersion(value string)
- func (s AVSpeechSynthesisProviderVoice) SetVoiceSize(value int64)
- func (s AVSpeechSynthesisProviderVoice) SpeechVoices() IAVSpeechSynthesisProviderVoice
- func (s AVSpeechSynthesisProviderVoice) SupportedLanguages() []string
- func (s AVSpeechSynthesisProviderVoice) Version() string
- func (s AVSpeechSynthesisProviderVoice) VoiceSize() int64
- type AVSpeechSynthesisProviderVoiceClass
- type AVSpeechSynthesisVoice
- func (s AVSpeechSynthesisVoice) AVSpeechSynthesisVoiceIdentifierAlex() string
- func (s AVSpeechSynthesisVoice) AudioFileSettings() foundation.INSDictionary
- func (s AVSpeechSynthesisVoice) Autorelease() AVSpeechSynthesisVoice
- func (s AVSpeechSynthesisVoice) EncodeWithCoder(coder foundation.INSCoder)
- func (s AVSpeechSynthesisVoice) Gender() AVSpeechSynthesisVoiceGender
- func (s AVSpeechSynthesisVoice) Identifier() string
- func (s AVSpeechSynthesisVoice) Init() AVSpeechSynthesisVoice
- func (s AVSpeechSynthesisVoice) Language() string
- func (s AVSpeechSynthesisVoice) Name() string
- func (s AVSpeechSynthesisVoice) Quality() AVSpeechSynthesisVoiceQuality
- func (s AVSpeechSynthesisVoice) SetVoice(value IAVSpeechSynthesisVoice)
- func (s AVSpeechSynthesisVoice) Voice() IAVSpeechSynthesisVoice
- func (s AVSpeechSynthesisVoice) VoiceTraits() AVSpeechSynthesisVoiceTraits
- type AVSpeechSynthesisVoiceClass
- func (ac AVSpeechSynthesisVoiceClass) Alloc() AVSpeechSynthesisVoice
- func (ac AVSpeechSynthesisVoiceClass) Class() objc.Class
- func (_AVSpeechSynthesisVoiceClass AVSpeechSynthesisVoiceClass) CurrentLanguageCode() string
- func (_AVSpeechSynthesisVoiceClass AVSpeechSynthesisVoiceClass) SpeechVoices() []AVSpeechSynthesisVoice
- type AVSpeechSynthesisVoiceGender
- type AVSpeechSynthesisVoiceQuality
- type AVSpeechSynthesisVoiceTraits
- type AVSpeechSynthesizer
- func (s AVSpeechSynthesizer) Autorelease() AVSpeechSynthesizer
- func (s AVSpeechSynthesizer) ContinueSpeaking() bool
- func (s AVSpeechSynthesizer) Delegate() AVSpeechSynthesizerDelegate
- func (s AVSpeechSynthesizer) Init() AVSpeechSynthesizer
- func (s AVSpeechSynthesizer) PauseSpeakingAtBoundary(boundary AVSpeechBoundary) bool
- func (s AVSpeechSynthesizer) Paused() bool
- func (s AVSpeechSynthesizer) PreUtteranceDelay() float64
- func (s AVSpeechSynthesizer) SetDelegate(value AVSpeechSynthesizerDelegate)
- func (s AVSpeechSynthesizer) SetPreUtteranceDelay(value float64)
- func (s AVSpeechSynthesizer) SetVoice(value IAVSpeechSynthesisVoice)
- func (s AVSpeechSynthesizer) SpeakUtterance(utterance IAVSpeechUtterance)
- func (s AVSpeechSynthesizer) Speaking() bool
- func (s AVSpeechSynthesizer) StopSpeakingAtBoundary(boundary AVSpeechBoundary) bool
- func (s AVSpeechSynthesizer) Voice() IAVSpeechSynthesisVoice
- func (s AVSpeechSynthesizer) WriteUtteranceToBufferCallback(utterance IAVSpeechUtterance, bufferCallback AVSpeechSynthesizerBufferCallback)
- func (s AVSpeechSynthesizer) WriteUtteranceToBufferCallbackToMarkerCallback(utterance IAVSpeechUtterance, bufferCallback AVSpeechSynthesizerBufferCallback, ...)
- type AVSpeechSynthesizerBufferCallback
- type AVSpeechSynthesizerClass
- func (ac AVSpeechSynthesizerClass) Alloc() AVSpeechSynthesizer
- func (_AVSpeechSynthesizerClass AVSpeechSynthesizerClass) AvailableVoicesDidChangeNotification() foundation.NSString
- func (ac AVSpeechSynthesizerClass) Class() objc.Class
- func (_AVSpeechSynthesizerClass AVSpeechSynthesizerClass) PersonalVoiceAuthorizationStatus() AVSpeechSynthesisPersonalVoiceAuthorizationStatus
- func (sc AVSpeechSynthesizerClass) RequestPersonalVoiceAuthorization(ctx context.Context) (AVSpeechSynthesisPersonalVoiceAuthorizationStatus, error)
- func (_AVSpeechSynthesizerClass AVSpeechSynthesizerClass) RequestPersonalVoiceAuthorizationWithCompletionHandler(handler AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler)
- type AVSpeechSynthesizerDelegate
- type AVSpeechSynthesizerDelegateConfig
- type AVSpeechSynthesizerDelegateObject
- func (o AVSpeechSynthesizerDelegateObject) BaseObject() objectivec.Object
- func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidCancelSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
- func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidContinueSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
- func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidFinishSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
- func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidPauseSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
- func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidStartSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
- func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerWillSpeakMarkerUtterance(synthesizer IAVSpeechSynthesizer, marker IAVSpeechSynthesisMarker, ...)
- func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerWillSpeakRangeOfSpeechStringUtterance(synthesizer IAVSpeechSynthesizer, characterRange foundation.NSRange, ...)
- type AVSpeechSynthesizerMarkerCallback
- type AVSpeechUtterance
- func AVSpeechUtteranceFromID(id objc.ID) AVSpeechUtterance
- func NewAVSpeechUtterance() AVSpeechUtterance
- func NewSpeechUtteranceWithAttributedString(string_ foundation.NSAttributedString) AVSpeechUtterance
- func NewSpeechUtteranceWithSSMLRepresentation(string_ string) AVSpeechUtterance
- func NewSpeechUtteranceWithString(string_ string) AVSpeechUtterance
- func (s AVSpeechUtterance) AVSpeechSynthesisIPANotationAttribute() string
- func (s AVSpeechUtterance) AVSpeechUtteranceDefaultSpeechRate() float32
- func (s AVSpeechUtterance) AVSpeechUtteranceMaximumSpeechRate() float32
- func (s AVSpeechUtterance) AVSpeechUtteranceMinimumSpeechRate() float32
- func (s AVSpeechUtterance) AttributedSpeechString() foundation.NSAttributedString
- func (s AVSpeechUtterance) Autorelease() AVSpeechUtterance
- func (s AVSpeechUtterance) EncodeWithCoder(coder foundation.INSCoder)
- func (s AVSpeechUtterance) Init() AVSpeechUtterance
- func (s AVSpeechUtterance) InitWithAttributedString(string_ foundation.NSAttributedString) AVSpeechUtterance
- func (s AVSpeechUtterance) InitWithSSMLRepresentation(string_ string) AVSpeechUtterance
- func (s AVSpeechUtterance) InitWithString(string_ string) AVSpeechUtterance
- func (s AVSpeechUtterance) PitchMultiplier() float32
- func (s AVSpeechUtterance) PostUtteranceDelay() float64
- func (s AVSpeechUtterance) PreUtteranceDelay() float64
- func (s AVSpeechUtterance) PrefersAssistiveTechnologySettings() bool
- func (s AVSpeechUtterance) Rate() float32
- func (s AVSpeechUtterance) SetPitchMultiplier(value float32)
- func (s AVSpeechUtterance) SetPostUtteranceDelay(value float64)
- func (s AVSpeechUtterance) SetPreUtteranceDelay(value float64)
- func (s AVSpeechUtterance) SetPrefersAssistiveTechnologySettings(value bool)
- func (s AVSpeechUtterance) SetRate(value float32)
- func (s AVSpeechUtterance) SetVoice(value IAVSpeechSynthesisVoice)
- func (s AVSpeechUtterance) SetVolume(value float32)
- func (s AVSpeechUtterance) SpeechString() string
- func (s AVSpeechUtterance) Voice() IAVSpeechSynthesisVoice
- func (s AVSpeechUtterance) Volume() float32
- type AVSpeechUtteranceClass
- func (ac AVSpeechUtteranceClass) Alloc() AVSpeechUtterance
- func (ac AVSpeechUtteranceClass) Class() objc.Class
- func (_AVSpeechUtteranceClass AVSpeechUtteranceClass) SpeechUtteranceWithAttributedString(string_ foundation.NSAttributedString) AVSpeechUtterance
- func (_AVSpeechUtteranceClass AVSpeechUtteranceClass) SpeechUtteranceWithSSMLRepresentation(string_ string) AVSpeechUtterance
- func (_AVSpeechUtteranceClass AVSpeechUtteranceClass) SpeechUtteranceWithString(string_ string) AVSpeechUtterance
- type AvaudiosessioninterruptionflagsShouldresume
- type AvaudiosessionsetactiveflagsNotifyothersondeactivation
- type BoolErrorHandler
- type BoolHandler
- type ErrorHandler
- type IAVAUPresetEvent
- type IAVAudioApplication
- type IAVAudioBuffer
- type IAVAudioChannelLayout
- type IAVAudioCompressedBuffer
- type IAVAudioConnectionPoint
- type IAVAudioConverter
- type IAVAudioEngine
- type IAVAudioEnvironmentDistanceAttenuationParameters
- type IAVAudioEnvironmentNode
- type IAVAudioEnvironmentReverbParameters
- type IAVAudioFile
- type IAVAudioFormat
- type IAVAudioIONode
- type IAVAudioInputNode
- type IAVAudioMixerNode
- type IAVAudioMixingDestination
- type IAVAudioNode
- type IAVAudioOutputNode
- type IAVAudioPCMBuffer
- type IAVAudioPlayer
- type IAVAudioPlayerNode
- type IAVAudioRecorder
- type IAVAudioRoutingArbiter
- type IAVAudioSequencer
- type IAVAudioSessionCapability
- type IAVAudioSinkNode
- type IAVAudioSourceNode
- type IAVAudioTime
- type IAVAudioUnit
- type IAVAudioUnitComponent
- type IAVAudioUnitComponentManager
- type IAVAudioUnitDelay
- type IAVAudioUnitDistortion
- type IAVAudioUnitEQ
- type IAVAudioUnitEQFilterParameters
- type IAVAudioUnitEffect
- type IAVAudioUnitGenerator
- type IAVAudioUnitMIDIInstrument
- type IAVAudioUnitReverb
- type IAVAudioUnitSampler
- type IAVAudioUnitTimeEffect
- type IAVAudioUnitTimePitch
- type IAVAudioUnitVarispeed
- type IAVExtendedNoteOnEvent
- type IAVExtendedTempoEvent
- type IAVMIDIChannelEvent
- type IAVMIDIChannelPressureEvent
- type IAVMIDIControlChangeEvent
- type IAVMIDIMetaEvent
- type IAVMIDINoteEvent
- type IAVMIDIPitchBendEvent
- type IAVMIDIPlayer
- type IAVMIDIPolyPressureEvent
- type IAVMIDIProgramChangeEvent
- type IAVMIDISysexEvent
- type IAVMusicEvent
- type IAVMusicTrack
- type IAVMusicUserEvent
- type IAVParameterEvent
- type IAVSpeechSynthesisMarker
- type IAVSpeechSynthesisProviderAudioUnit
- type IAVSpeechSynthesisProviderRequest
- type IAVSpeechSynthesisProviderVoice
- type IAVSpeechSynthesisVoice
- type IAVSpeechSynthesizer
- type IAVSpeechUtterance
Constants ¶
This section is empty.
Variables ¶
var (
	// AVAudioApplicationInputMuteStateChangeNotification is a notification the system posts when the app’s audio input mute state changes.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioApplication/inputMuteStateChangeNotification
	AVAudioApplicationInputMuteStateChangeNotification foundation.NSNotificationName
	// AVAudioUnitComponentManagerRegistrationsChangedNotification is a notification the component manager generates when it updates its list of components.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponentManager/registrationsChangedNotification
	AVAudioUnitComponentManagerRegistrationsChangedNotification foundation.NSNotificationName
	// AVSpeechSynthesisAvailableVoicesDidChangeNotification is a notification that indicates a change in available voices for speech synthesis.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/availableVoicesDidChangeNotification
	AVSpeechSynthesisAvailableVoicesDidChangeNotification foundation.NSNotificationName
)
var (
	// AVAudioApplicationMuteStateKey is a user information key to determine the app’s audio mute state.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioApplication/muteStateKey
	AVAudioApplicationMuteStateKey string
	// AVAudioBitRateStrategy_Constant is a value that represents a constant bit rate strategy.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioBitRateStrategy_Constant
	AVAudioBitRateStrategy_Constant string
	// AVAudioBitRateStrategy_LongTermAverage is a value that represents an average bit rate strategy.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioBitRateStrategy_LongTermAverage
	AVAudioBitRateStrategy_LongTermAverage string
	// AVAudioBitRateStrategy_Variable is a value that represents a variable bit rate strategy.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioBitRateStrategy_Variable
	AVAudioBitRateStrategy_Variable string
	// AVAudioBitRateStrategy_VariableConstrained is a value that represents a constrained variable bit rate strategy.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioBitRateStrategy_VariableConstrained
	AVAudioBitRateStrategy_VariableConstrained string
	// AVAudioEngineConfigurationChangeNotification is a notification the framework posts when the audio engine configuration changes.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngineConfigurationChangeNotification
	AVAudioEngineConfigurationChangeNotification string
	// AVAudioFileTypeKey is a string that indicates the audio file type.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioFileTypeKey
	AVAudioFileTypeKey string
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mutestatekey
	AVAudioSessionMuteStateKey foundation.NSString
	// AVAudioUnitComponentTagsDidChangeNotification is a notification that indicates when component tags change.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponentTagsDidChangeNotification
	AVAudioUnitComponentTagsDidChangeNotification string
	// AVAudioUnitManufacturerNameApple indicates that the audio unit manufacturer is Apple.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitManufacturerNameApple
	AVAudioUnitManufacturerNameApple string
	// AVAudioUnitTypeEffect is an audio unit type that represents an effect.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeEffect
	AVAudioUnitTypeEffect string
	// AVAudioUnitTypeFormatConverter is an audio unit type that represents a format converter.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeFormatConverter
	AVAudioUnitTypeFormatConverter string
	// AVAudioUnitTypeGenerator is an audio unit type that represents a generator.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeGenerator
	AVAudioUnitTypeGenerator string
	// AVAudioUnitTypeMIDIProcessor is an audio unit type that represents a MIDI processor.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeMIDIProcessor
	AVAudioUnitTypeMIDIProcessor string
	// AVAudioUnitTypeMixer is an audio unit type that represents a mixer.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeMixer
	AVAudioUnitTypeMixer string
	// AVAudioUnitTypeMusicDevice is an audio unit type that represents a music device.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeMusicDevice
	AVAudioUnitTypeMusicDevice string
	// AVAudioUnitTypeMusicEffect is an audio unit type that represents a music effect.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeMusicEffect
	AVAudioUnitTypeMusicEffect string
	// AVAudioUnitTypeOfflineEffect is an audio unit type that represents an offline effect.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeOfflineEffect
	AVAudioUnitTypeOfflineEffect string
	// AVAudioUnitTypeOutput is an audio unit type that represents an output.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypeOutput
	AVAudioUnitTypeOutput string
	// AVAudioUnitTypePanner is an audio unit type that represents a panner.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTypePanner
	AVAudioUnitTypePanner string
	// See: https://developer.apple.com/documentation/AVFAudio/AVChannelLayoutKey
	AVChannelLayoutKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderASPFrequencyKey
	AVEncoderASPFrequencyKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderAudioQualityForVBRKey
	AVEncoderAudioQualityForVBRKey string
	// AVEncoderAudioQualityKey is a constant that represents an integer from the audio quality enumeration.
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderAudioQualityKey
	AVEncoderAudioQualityKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderBitDepthHintKey
	AVEncoderBitDepthHintKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderBitRateKey
	AVEncoderBitRateKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderBitRatePerChannelKey
	AVEncoderBitRatePerChannelKey string
	// AVEncoderBitRateStrategyKey is a constant that represents the bit rate strategy for the encoder to use.
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderBitRateStrategyKey
	AVEncoderBitRateStrategyKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderContentSourceKey
	AVEncoderContentSourceKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVEncoderDynamicRangeControlConfigurationKey
	AVEncoderDynamicRangeControlConfigurationKey string
	// AVFormatIDKey is an integer value that represents the format of the audio data.
	// See: https://developer.apple.com/documentation/AVFAudio/AVFormatIDKey
	AVFormatIDKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVLinearPCMBitDepthKey
	AVLinearPCMBitDepthKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVLinearPCMIsBigEndianKey
	AVLinearPCMIsBigEndianKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVLinearPCMIsFloatKey
	AVLinearPCMIsFloatKey string
	// See: https://developer.apple.com/documentation/AVFAudio/AVLinearPCMIsNonInterleaved
	AVLinearPCMIsNonInterleaved string
	// AVNumberOfChannelsKey is an integer value that represents the number of channels.
	// See: https://developer.apple.com/documentation/AVFAudio/AVNumberOfChannelsKey
	AVNumberOfChannelsKey string
	// AVSampleRateConverterAlgorithmKey is a string value that represents the sample rate converter algorithm to use.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSampleRateConverterAlgorithmKey
	AVSampleRateConverterAlgorithmKey string
	// AVSampleRateConverterAlgorithm_Mastering is the mastering sample rate converter algorithm.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSampleRateConverterAlgorithm_Mastering
	AVSampleRateConverterAlgorithm_Mastering string
	// AVSampleRateConverterAlgorithm_MinimumPhase is the minimum phase sample rate converter algorithm.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSampleRateConverterAlgorithm_MinimumPhase
	AVSampleRateConverterAlgorithm_MinimumPhase string
	// AVSampleRateConverterAlgorithm_Normal is the standard sample rate converter algorithm.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSampleRateConverterAlgorithm_Normal
	AVSampleRateConverterAlgorithm_Normal string
	// AVSampleRateConverterAudioQualityKey is an integer value that represents the audio quality for conversion.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSampleRateConverterAudioQualityKey
	AVSampleRateConverterAudioQualityKey string
	// AVSampleRateKey is a floating point value that represents the sample rate, in hertz.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSampleRateKey
	AVSampleRateKey string
	// AVSpeechSynthesisIPANotationAttribute is a string that contains International Phonetic Alphabet (IPA) symbols the speech synthesizer uses to control pronunciation of certain words or phrases.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisIPANotationAttribute
	AVSpeechSynthesisIPANotationAttribute string
	// AVSpeechSynthesisVoiceIdentifierAlex is the voice that the system identifies as Alex.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoiceIdentifierAlex
	AVSpeechSynthesisVoiceIdentifierAlex string
)
var (
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/availableinputschangenotification
	AVAudioSessionAvailableInputsChangeNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/interruptionnotification
	AVAudioSessionInterruptionNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mediaserviceswerelostnotification
	AVAudioSessionMediaServicesWereLostNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mediaserviceswereresetnotification
	AVAudioSessionMediaServicesWereResetNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/microphoneinjectioncapabilitieschangenotification
	AVAudioSessionMicrophoneInjectionCapabilitiesChangeNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/outputmutestatechangenotification
	AVAudioSessionOutputMuteStateChangeNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/renderingcapabilitieschangenotification
	AVAudioSessionRenderingCapabilitiesChangeNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/renderingmodechangenotification
	AVAudioSessionRenderingModeChangeNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/routechangenotification
	AVAudioSessionRouteChangeNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/silencesecondaryaudiohintnotification
	AVAudioSessionSilenceSecondaryAudioHintNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/spatialplaybackcapabilitieschangednotification
	AVAudioSessionSpatialPlaybackCapabilitiesChangedNotification foundation.NSNotification
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/userintenttounmuteoutputnotification
	AVAudioSessionUserIntentToUnmuteOutputNotification foundation.NSNotification
)
var (
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/category-swift.struct/ambient
	AVAudioSessionCategoryAmbient objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/category-swift.struct/audioprocessing
	AVAudioSessionCategoryAudioProcessing objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/category-swift.struct/multiroute
	AVAudioSessionCategoryMultiRoute objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/category-swift.struct/playandrecord
	AVAudioSessionCategoryPlayAndRecord objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/category-swift.struct/playback
	AVAudioSessionCategoryPlayback objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/category-swift.struct/record
	AVAudioSessionCategoryRecord objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/category-swift.struct/soloambient
	AVAudioSessionCategorySoloAmbient objc.ID
	// AVAudioSessionLocationLower is a value that indicates that the data source is located near the bottom end of the device.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/Location/lower
	AVAudioSessionLocationLower objc.ID
	// AVAudioSessionLocationUpper is a value that indicates that the data source is located near the top end of the device.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/Location/upper
	AVAudioSessionLocationUpper objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/default
	AVAudioSessionModeDefault objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/dualroute
	AVAudioSessionModeDualRoute objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/gamechat
	AVAudioSessionModeGameChat objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/measurement
	AVAudioSessionModeMeasurement objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/movieplayback
	AVAudioSessionModeMoviePlayback objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/shortformvideo
	AVAudioSessionModeShortFormVideo objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/spokenaudio
	AVAudioSessionModeSpokenAudio objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/videochat
	AVAudioSessionModeVideoChat objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/videorecording
	AVAudioSessionModeVideoRecording objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/voicechat
	AVAudioSessionModeVoiceChat objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/mode-swift.struct/voiceprompt
	AVAudioSessionModeVoicePrompt objc.ID
	// AVAudioSessionOrientationBack is a data source that points outward from the back of the device, away from the user.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/Orientation/back
	AVAudioSessionOrientationBack objc.ID
	// AVAudioSessionOrientationBottom is a data source that points downward.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/Orientation/bottom
	AVAudioSessionOrientationBottom objc.ID
	// AVAudioSessionOrientationFront is a data source that points outward from the front of the device, toward the user.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/Orientation/front
	AVAudioSessionOrientationFront objc.ID
	// AVAudioSessionOrientationLeft is a data source that points outward to the left of the device, away from the user.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/Orientation/left
	AVAudioSessionOrientationLeft objc.ID
	// AVAudioSessionOrientationRight is a data source that points outward to the right of the device, away from the user.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/Orientation/right
	AVAudioSessionOrientationRight objc.ID
	// AVAudioSessionOrientationTop is a data source that points upward.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/Orientation/top
	AVAudioSessionOrientationTop objc.ID
	// AVAudioSessionPolarPatternCardioid is a data source that’s most sensitive to sound from the direction of the data source and is nearly insensitive to sound from the opposite direction.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/PolarPattern/cardioid
	AVAudioSessionPolarPatternCardioid objc.ID
	// AVAudioSessionPolarPatternOmnidirectional is a data source that’s equally sensitive to sound from any direction.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/PolarPattern/omnidirectional
	AVAudioSessionPolarPatternOmnidirectional objc.ID
	// See: https://developer.apple.com/documentation/avfaudio/avaudiosession/polarpattern/stereo
	AVAudioSessionPolarPatternStereo objc.ID
	// AVAudioSessionPolarPatternSubcardioid is a data source that’s most sensitive to sound from the direction of the data source and is less sensitive to sound from the opposite direction.
	// See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/PolarPattern/subcardioid
	AVAudioSessionPolarPatternSubcardioid objc.ID
)
var (
	// AVSpeechUtteranceDefaultSpeechRate is the default rate the speech synthesizer uses when speaking an utterance.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtteranceDefaultSpeechRate
	AVSpeechUtteranceDefaultSpeechRate float32
	// AVSpeechUtteranceMaximumSpeechRate is the maximum rate the speech synthesizer uses when speaking an utterance.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtteranceMaximumSpeechRate
	AVSpeechUtteranceMaximumSpeechRate float32
	// AVSpeechUtteranceMinimumSpeechRate is the minimum rate the speech synthesizer uses when speaking an utterance.
	// See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtteranceMinimumSpeechRate
	AVSpeechUtteranceMinimumSpeechRate float32
)
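These three constants bound the value you pass to AVSpeechUtterance.SetRate. A minimal pure-Go sketch of keeping a requested rate in range; the literal bounds in main are stand-ins for illustration only — real code should read AVSpeechUtteranceMinimumSpeechRate and AVSpeechUtteranceMaximumSpeechRate from this package:

```go
package main

import "fmt"

// clampRate keeps a requested speech rate within the synthesizer's
// supported range by snapping out-of-range values to the nearest bound.
func clampRate(rate, min, max float32) float32 {
	if rate < min {
		return min
	}
	if rate > max {
		return max
	}
	return rate
}

func main() {
	// Stand-in bounds; use the package's speech-rate constants in real code.
	const minRate, maxRate float32 = 0.0, 1.0
	fmt.Println(clampRate(1.7, minRate, maxRate)) // too fast: clamps to maxRate
	fmt.Println(clampRate(0.5, minRate, maxRate)) // in range: passes through
}
```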
var AVAudioSequencerInfoDictionaryKeys struct { // Album: A key that represents the album. Album AVAudioSequencerInfoDictionaryKey // ApproximateDurationInSeconds: A key that represents the approximate duration. ApproximateDurationInSeconds AVAudioSequencerInfoDictionaryKey // Artist: A key that represents the artist. Artist AVAudioSequencerInfoDictionaryKey // ChannelLayout: A key that represents the channel layout. ChannelLayout AVAudioSequencerInfoDictionaryKey // Comments: A key that represents the comments. Comments AVAudioSequencerInfoDictionaryKey // Composer: A key that represents the composer. Composer AVAudioSequencerInfoDictionaryKey // Copyright: A key that represents the copyright statement. Copyright AVAudioSequencerInfoDictionaryKey // EncodingApplication: A key that represents the encoding application. EncodingApplication AVAudioSequencerInfoDictionaryKey // Genre: A key that represents the genre. Genre AVAudioSequencerInfoDictionaryKey // ISRC: A key that represents the international standard recording code. ISRC AVAudioSequencerInfoDictionaryKey // KeySignature: A key that represents the key signature. KeySignature AVAudioSequencerInfoDictionaryKey // Lyricist: A key that represents the lyricist. Lyricist AVAudioSequencerInfoDictionaryKey // NominalBitRate: A key that represents the nominal bit rate. NominalBitRate AVAudioSequencerInfoDictionaryKey // RecordedDate: A key that represents the date of the recording. RecordedDate AVAudioSequencerInfoDictionaryKey // SourceBitDepth: A key that represents the bit depth of the source. SourceBitDepth AVAudioSequencerInfoDictionaryKey // SourceEncoder: A key that represents the encoder the source uses. SourceEncoder AVAudioSequencerInfoDictionaryKey // SubTitle: A key that represents the subtitle. SubTitle AVAudioSequencerInfoDictionaryKey // Tempo: A key that represents the tempo. Tempo AVAudioSequencerInfoDictionaryKey // TimeSignature: A key that represents the time signature. 
TimeSignature AVAudioSequencerInfoDictionaryKey // Title: A key that represents the title. Title AVAudioSequencerInfoDictionaryKey // TrackNumber: A key that represents the track number. TrackNumber AVAudioSequencerInfoDictionaryKey // Year: A key that represents the year. Year AVAudioSequencerInfoDictionaryKey }
AVAudioSequencerInfoDictionaryKeys provides typed accessors for AVAudioSequencerInfoDictionaryKey constants.
var (
	// AVExtendedNoteOnEventDefaultInstrument is a constant that represents the default instrument identifier.
	// See: https://developer.apple.com/documentation/AVFAudio/AVExtendedNoteOnEvent/defaultInstrument
	AVExtendedNoteOnEventDefaultInstrument uint32
)
Functions ¶
func NewAVAudioApplicationMicrophoneInjectionPermissionBlock ¶
func NewAVAudioApplicationMicrophoneInjectionPermissionBlock(handler AVAudioApplicationMicrophoneInjectionPermissionHandler) (objc.ID, func())
NewAVAudioApplicationMicrophoneInjectionPermissionBlock wraps a Go AVAudioApplicationMicrophoneInjectionPermissionHandler as an Objective-C block. The caller must defer the returned cleanup function.
Used by:
- [AVAudioApplication.RequestMicrophoneInjectionPermissionWithCompletionHandler]
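A hedged usage sketch of the wrap-then-defer pattern. The handler's parameter type and the `app` value are assumptions (check the package's declaration of AVAudioApplicationMicrophoneInjectionPermissionHandler); only the pattern of wrapping the handler and deferring the cleanup function comes from this documentation:

```go
// app is an AVAudioApplication obtained elsewhere; the handler's parameter
// type below is an assumption based on the handler type's name.
block, release := avfaudio.NewAVAudioApplicationMicrophoneInjectionPermissionBlock(
	func(permission avfaudio.AVAudioApplicationMicrophoneInjectionPermission) {
		// React to the user's response to the permission prompt.
	},
)
defer release() // required: frees the Objective-C block wrapper
app.RequestMicrophoneInjectionPermissionWithCompletionHandler(block)
```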
func NewAVAudioUnitComponentBlock ¶
func NewAVAudioUnitComponentBlock(handler AVAudioUnitComponentHandler) (objc.ID, func())
NewAVAudioUnitComponentBlock wraps a Go AVAudioUnitComponentHandler as an Objective-C block. The caller must defer the returned cleanup function.
func NewAVAudioUnitErrorBlock ¶
func NewAVAudioUnitErrorBlock(handler AVAudioUnitErrorHandler) (objc.ID, func())
NewAVAudioUnitErrorBlock wraps a Go AVAudioUnitErrorHandler as an Objective-C block. The caller must defer the returned cleanup function.
Used by:
- [AVAudioUnit.InstantiateWithComponentDescriptionOptionsCompletionHandler]
func NewAVAudioVoiceProcessingSpeechActivityEventBlock ¶
func NewAVAudioVoiceProcessingSpeechActivityEventBlock(handler AVAudioVoiceProcessingSpeechActivityEventHandler) (objc.ID, func())
NewAVAudioVoiceProcessingSpeechActivityEventBlock wraps a Go AVAudioVoiceProcessingSpeechActivityEventHandler as an Objective-C block. The caller must defer the returned cleanup function.
func NewAVSpeechSynthesisPersonalVoiceAuthorizationStatusBlock ¶
func NewAVSpeechSynthesisPersonalVoiceAuthorizationStatusBlock(handler AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler) (objc.ID, func())
NewAVSpeechSynthesisPersonalVoiceAuthorizationStatusBlock wraps a Go AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler as an Objective-C block. The caller must defer the returned cleanup function.
Used by:
- [AVSpeechSynthesizer.RequestPersonalVoiceAuthorizationWithCompletionHandler]
func NewBoolBlock ¶
func NewBoolBlock(handler BoolHandler) (objc.ID, func())
NewBoolBlock wraps a Go BoolHandler as an Objective-C block. The caller must defer the returned cleanup function.
Used by:
- [AVAudioApplication.RequestRecordPermissionWithCompletionHandler]
- AVAudioApplication.SetInputMuteStateChangeHandlerError
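A minimal sketch of requesting record permission with a wrapped Go callback — this assumes BoolHandler is `func(bool)` and that `app` is an AVAudioApplication obtained elsewhere:

```go
block, release := avfaudio.NewBoolBlock(func(granted bool) {
	if granted {
		// Permission granted: safe to begin recording.
	}
})
defer release() // required: frees the Objective-C block wrapper
app.RequestRecordPermissionWithCompletionHandler(block)
```

The cleanup function must outlive any use of the block by the system, which is why it's deferred rather than called immediately.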
func NewBoolErrorBlock ¶
func NewBoolErrorBlock(handler BoolErrorHandler) (objc.ID, func())
NewBoolErrorBlock wraps a Go BoolErrorHandler as an Objective-C block. The caller must defer the returned cleanup function.
func NewErrorBlock ¶
func NewErrorBlock(handler ErrorHandler) (objc.ID, func())
NewErrorBlock wraps a Go ErrorHandler as an Objective-C block. The caller must defer the returned cleanup function.
Used by:
- AVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler
- AVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionHandler
- AVAudioPlayerNode.ScheduleBufferCompletionCallbackTypeCompletionHandler
- AVAudioPlayerNode.ScheduleBufferCompletionHandler
- AVAudioPlayerNode.ScheduleFileAtTimeCompletionCallbackTypeCompletionHandler
- AVAudioPlayerNode.ScheduleFileAtTimeCompletionHandler
- AVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler
- AVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler
- AVMIDIPlayer.Play
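For example, a completion callback for AVMIDIPlayer.Play can be built the same way. The `func(err error)` signature is an assumption — adjust it to match the package's ErrorHandler declaration — and `midiPlayer` is an AVMIDIPlayer obtained elsewhere:

```go
done, release := avfaudio.NewErrorBlock(func(err error) {
	if err != nil {
		log.Println("MIDI playback finished with error:", err)
	}
})
defer release() // free the Objective-C block wrapper once playback is done
midiPlayer.Play(done)
```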
func NewconstAudioBufferListBlock ¶
func NewconstAudioBufferListBlock(handler constAudioBufferListHandler) (objc.ID, func())
NewconstAudioBufferListBlock wraps a Go [constAudioBufferListHandler] as an Objective-C block. The caller must defer the returned cleanup function.
Types ¶
type AVAUPresetEvent ¶
type AVAUPresetEvent struct {
AVMusicEvent
}
An object that represents a preset load and change on the music track’s destination audio unit.
Creating a Preset Event ¶
- AVAUPresetEvent.InitWithScopeElementDictionary: Creates an event with the scope, element, and dictionary for the preset.
Configuring a Preset Event ¶
- AVAUPresetEvent.Scope: The audio unit scope.
- AVAUPresetEvent.SetScope
- AVAUPresetEvent.Element: The element index in the scope.
- AVAUPresetEvent.SetElement
- AVAUPresetEvent.PresetDictionary: The dictionary that contains the preset.
See: https://developer.apple.com/documentation/AVFAudio/AVAUPresetEvent
func AVAUPresetEventFromID ¶
func AVAUPresetEventFromID(id objc.ID) AVAUPresetEvent
AVAUPresetEventFromID constructs an AVAUPresetEvent from an objc.ID.
An object that represents a preset load and change on the music track’s destination audio unit.
func NewAUPresetEventWithScopeElementDictionary ¶
func NewAUPresetEventWithScopeElementDictionary(scope uint32, element uint32, presetDictionary foundation.INSDictionary) AVAUPresetEvent
Creates an event with the scope, element, and dictionary for the preset.
scope: The audio unit scope.
element: The element index in the scope.
presetDictionary: The dictionary that contains the preset.
Discussion ¶
The system copies the dictionary you specify, and the copy isn’t editable after the system creates the event. The `scope` parameter must be `kAudioUnitScope_Global`, and the element index should be `0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAUPresetEvent/init(scope:element:dictionary:)
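A short sketch of creating a preset event under those constraints. The local constant mirrors CoreAudio's `kAudioUnitScope_Global` (value 0); `presetDict` is a foundation.INSDictionary holding the preset contents, obtained elsewhere:

```go
const kAudioUnitScopeGlobal uint32 = 0 // value of CoreAudio's kAudioUnitScope_Global

// The system copies presetDict, so edits made to it afterward don't
// affect the event.
event := avfaudio.NewAUPresetEventWithScopeElementDictionary(kAudioUnitScopeGlobal, 0, presetDict)
_ = event
```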
func NewAVAUPresetEvent ¶
func NewAVAUPresetEvent() AVAUPresetEvent
NewAVAUPresetEvent creates a new AVAUPresetEvent instance.
func (AVAUPresetEvent) Autorelease ¶
func (a AVAUPresetEvent) Autorelease() AVAUPresetEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVAUPresetEvent) Element ¶
func (a AVAUPresetEvent) Element() uint32
The element index in the scope.
See: https://developer.apple.com/documentation/AVFAudio/AVAUPresetEvent/element
func (AVAUPresetEvent) Init ¶
func (a AVAUPresetEvent) Init() AVAUPresetEvent
Init initializes the instance.
func (AVAUPresetEvent) InitWithScopeElementDictionary ¶
func (a AVAUPresetEvent) InitWithScopeElementDictionary(scope uint32, element uint32, presetDictionary foundation.INSDictionary) AVAUPresetEvent
Creates an event with the scope, element, and dictionary for the preset.
scope: The audio unit scope.
element: The element index in the scope.
presetDictionary: The dictionary that contains the preset.
Discussion ¶
The system copies the dictionary you specify, and the copy isn’t editable after the system creates the event. The `scope` parameter must be `kAudioUnitScope_Global`, and the element index should be `0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAUPresetEvent/init(scope:element:dictionary:)
func (AVAUPresetEvent) PresetDictionary ¶
func (a AVAUPresetEvent) PresetDictionary() foundation.INSDictionary
The dictionary that contains the preset.
See: https://developer.apple.com/documentation/AVFAudio/AVAUPresetEvent/presetDictionary
func (AVAUPresetEvent) Scope ¶
func (a AVAUPresetEvent) Scope() uint32
The audio unit scope.
See: https://developer.apple.com/documentation/AVFAudio/AVAUPresetEvent/scope
func (AVAUPresetEvent) SetElement ¶
func (a AVAUPresetEvent) SetElement(value uint32)
func (AVAUPresetEvent) SetScope ¶
func (a AVAUPresetEvent) SetScope(value uint32)
type AVAUPresetEventClass ¶
type AVAUPresetEventClass struct {
// contains filtered or unexported fields
}
func GetAVAUPresetEventClass ¶
func GetAVAUPresetEventClass() AVAUPresetEventClass
GetAVAUPresetEventClass returns the class object for AVAUPresetEvent.
func (AVAUPresetEventClass) Alloc ¶
func (ac AVAUPresetEventClass) Alloc() AVAUPresetEvent
Alloc allocates memory for a new instance of the class.
func (AVAUPresetEventClass) Class ¶
func (ac AVAUPresetEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudio3DAngularOrientation ¶
type AVAudio3DAngularOrientation struct {
Yaw float32 // The side-to-side movement of the listener’s head.
Pitch float32 // The up-and-down movement of the listener’s head.
Roll float32 // The tilt of the listener’s head.
}
AVAudio3DAngularOrientation is a C struct type that represents the angular orientation of the listener in 3D space.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DAngularOrientation
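For example, to describe a listener turned to the left with a slight upward tilt. The angles are in degrees per the struct's field docs; `listener` and its setter name are assumptions (an AVAudioEnvironmentNode exposes the listener's angular orientation):

```go
orientation := avfaudio.AVAudio3DAngularOrientation{
	Yaw:   -90, // turned to the left
	Pitch: 10,  // tilted slightly upward
	Roll:  0,   // no sideways tilt
}
listener.SetListenerAngularOrientation(orientation) // method name assumed
```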
type AVAudio3DMixing ¶
type AVAudio3DMixing interface {
objectivec.IObject
// A value that simulates filtering of the direct path of sound due to an obstacle.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
Obstruction() float32
// A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
Occlusion() float32
// The location of the source in the 3D environment.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
Position() AVAudio3DPoint
// A value that changes the playback rate of the input signal.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
Rate() float32
// The in-head mode for a point source.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
// A value that controls the blend of dry and reverb processed audio.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
ReverbBlend() float32
// The source mode for the input bus of the audio environment node.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
SourceMode() AVAudio3DMixingSourceMode
// The type of rendering algorithm the mixer uses.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
// A value that simulates filtering of the direct path of sound due to an obstacle.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
SetObstruction(value float32)
// A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
SetOcclusion(value float32)
// The location of the source in the 3D environment.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
SetPosition(value AVAudio3DPoint)
// A value that changes the playback rate of the input signal.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
SetRate(value float32)
// The in-head mode for a point source.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
// A value that controls the blend of dry and reverb processed audio.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
SetReverbBlend(value float32)
// The source mode for the input bus of the audio environment node.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
SetSourceMode(value AVAudio3DMixingSourceMode)
// The type of rendering algorithm the mixer uses.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
}
A collection of properties that define a node's 3D mixing behavior.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing
type AVAudio3DMixingObject ¶
type AVAudio3DMixingObject struct {
objectivec.Object
}
AVAudio3DMixingObject wraps an existing Objective-C object that conforms to the AVAudio3DMixing protocol.
func AVAudio3DMixingObjectFromID ¶
func AVAudio3DMixingObjectFromID(id objc.ID) AVAudio3DMixingObject
AVAudio3DMixingObjectFromID constructs an AVAudio3DMixingObject from an objc.ID. Conformance to the protocol is determined at runtime.
func (AVAudio3DMixingObject) BaseObject ¶
func (o AVAudio3DMixingObject) BaseObject() objectivec.Object
func (AVAudio3DMixingObject) Obstruction ¶
func (o AVAudio3DMixingObject) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudio3DMixingObject) Occlusion ¶
func (o AVAudio3DMixingObject) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudio3DMixingObject) PointSourceInHeadMode ¶
func (o AVAudio3DMixingObject) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudio3DMixingObject) Position ¶
func (o AVAudio3DMixingObject) Position() AVAudio3DPoint
The location of the source in the 3D environment.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudio3DMixingObject) Rate ¶
func (o AVAudio3DMixingObject) Rate() float32
A value that changes the playback rate of the input signal.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
func (AVAudio3DMixingObject) RenderingAlgorithm ¶
func (o AVAudio3DMixingObject) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudio3DMixingObject) ReverbBlend ¶
func (o AVAudio3DMixingObject) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
func (AVAudio3DMixingObject) SetObstruction ¶
func (o AVAudio3DMixingObject) SetObstruction(value float32)
func (AVAudio3DMixingObject) SetOcclusion ¶
func (o AVAudio3DMixingObject) SetOcclusion(value float32)
func (AVAudio3DMixingObject) SetPointSourceInHeadMode ¶
func (o AVAudio3DMixingObject) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudio3DMixingObject) SetPosition ¶
func (o AVAudio3DMixingObject) SetPosition(value AVAudio3DPoint)
func (AVAudio3DMixingObject) SetRate ¶
func (o AVAudio3DMixingObject) SetRate(value float32)
func (AVAudio3DMixingObject) SetRenderingAlgorithm ¶
func (o AVAudio3DMixingObject) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudio3DMixingObject) SetReverbBlend ¶
func (o AVAudio3DMixingObject) SetReverbBlend(value float32)
func (AVAudio3DMixingObject) SetSourceMode ¶
func (o AVAudio3DMixingObject) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudio3DMixingObject) SourceMode ¶
func (o AVAudio3DMixingObject) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
type AVAudio3DMixingPointSourceInHeadMode ¶
type AVAudio3DMixingPointSourceInHeadMode int
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixingPointSourceInHeadMode
const (
	// AVAudio3DMixingPointSourceInHeadModeBypass: The point source distributes into each output channel inside the head of the listener.
	AVAudio3DMixingPointSourceInHeadModeBypass AVAudio3DMixingPointSourceInHeadMode = 1
	// AVAudio3DMixingPointSourceInHeadModeMono: The point source remains a single mono source inside the head of the listener regardless of the channels it consists of.
	AVAudio3DMixingPointSourceInHeadModeMono AVAudio3DMixingPointSourceInHeadMode = 0
)
func (AVAudio3DMixingPointSourceInHeadMode) String ¶
func (e AVAudio3DMixingPointSourceInHeadMode) String() string
type AVAudio3DMixingRenderingAlgorithm ¶
type AVAudio3DMixingRenderingAlgorithm int
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixingRenderingAlgorithm
const (
	// AVAudio3DMixingRenderingAlgorithmAuto: Automatically selects the highest-quality rendering algorithm available for the current playback hardware.
	AVAudio3DMixingRenderingAlgorithmAuto AVAudio3DMixingRenderingAlgorithm = 7
	// AVAudio3DMixingRenderingAlgorithmEqualPowerPanning: An algorithm that pans the data of the mixer bus into a stereo field.
	AVAudio3DMixingRenderingAlgorithmEqualPowerPanning AVAudio3DMixingRenderingAlgorithm = 0
	// AVAudio3DMixingRenderingAlgorithmHRTF: A high-quality algorithm that uses filtering to emulate 3D space in headphones.
	AVAudio3DMixingRenderingAlgorithmHRTF AVAudio3DMixingRenderingAlgorithm = 2
	// AVAudio3DMixingRenderingAlgorithmHRTFHQ: A higher-quality head-related transfer function rendering algorithm.
	AVAudio3DMixingRenderingAlgorithmHRTFHQ AVAudio3DMixingRenderingAlgorithm = 6
	// AVAudio3DMixingRenderingAlgorithmSoundField: An algorithm that renders to multichannel hardware.
	AVAudio3DMixingRenderingAlgorithmSoundField AVAudio3DMixingRenderingAlgorithm = 3
	// AVAudio3DMixingRenderingAlgorithmSphericalHead: An algorithm that emulates 3D space in headphones by simulating interaural time delays and other spatial cues.
	AVAudio3DMixingRenderingAlgorithmSphericalHead AVAudio3DMixingRenderingAlgorithm = 1
	// AVAudio3DMixingRenderingAlgorithmStereoPassThrough: An algorithm to use when the source data doesn’t need localization.
	AVAudio3DMixingRenderingAlgorithmStereoPassThrough AVAudio3DMixingRenderingAlgorithm = 5
)
func (AVAudio3DMixingRenderingAlgorithm) String ¶
func (e AVAudio3DMixingRenderingAlgorithm) String() string
type AVAudio3DMixingSourceMode ¶
type AVAudio3DMixingSourceMode int
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixingSourceMode
const (
	// AVAudio3DMixingSourceModeAmbienceBed: The input channels spread around the listener as far-field sources that anchor to global space.
	AVAudio3DMixingSourceModeAmbienceBed AVAudio3DMixingSourceMode = 3
	// AVAudio3DMixingSourceModeBypass: A mode that does no spatial rendering.
	AVAudio3DMixingSourceModeBypass AVAudio3DMixingSourceMode = 1
	// AVAudio3DMixingSourceModePointSource: All channels of the bus render as a single source at the location of the source node.
	AVAudio3DMixingSourceModePointSource AVAudio3DMixingSourceMode = 2
	// AVAudio3DMixingSourceModeSpatializeIfMono: A mono input bus that renders as a point source at the location of the source node.
	AVAudio3DMixingSourceModeSpatializeIfMono AVAudio3DMixingSourceMode = 0
)
func (AVAudio3DMixingSourceMode) String ¶
func (e AVAudio3DMixingSourceMode) String() string
type AVAudio3DPoint ¶
type AVAudio3DPoint struct {
X float32 // The location on the x-axis, in meters.
Y float32 // The location on the y-axis, in meters.
Z float32 // The location on the z-axis, in meters.
}
AVAudio3DPoint - A structure that represents a point in 3D space.
[Full Topic] [Full Topic]: https://developer.apple.com/documentation/AVFAudio/AVAudio3DPoint
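Because positions are expressed in meters, spatial math on AVAudio3DPoint reduces to plain vector arithmetic. A minimal sketch (the struct is redeclared locally so the snippet stands alone; in real use it comes from this package):

```go
package main

import (
	"fmt"
	"math"
)

// Local stand-in for avfaudio.AVAudio3DPoint; the fields mirror the binding.
type AVAudio3DPoint struct {
	X, Y, Z float32 // meters
}

// Distance returns the Euclidean distance between two points, in meters.
func Distance(a, b AVAudio3DPoint) float32 {
	dx := float64(a.X - b.X)
	dy := float64(a.Y - b.Y)
	dz := float64(a.Z - b.Z)
	return float32(math.Sqrt(dx*dx + dy*dy + dz*dz))
}

func main() {
	listener := AVAudio3DPoint{0, 0, 0}
	source := AVAudio3DPoint{3, 4, 0}
	fmt.Println(Distance(listener, source)) // 5
}
```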
type AVAudio3DVector ¶
type AVAudio3DVector = AVAudio3DPoint
AVAudio3DVector is a structure that represents a vector in 3D space.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DVector
type AVAudio3DVectorOrientation ¶
type AVAudio3DVectorOrientation struct {
Forward AVAudio3DVector // The forward vector points in the direction that the listener faces.
Up AVAudio3DVector // The up vector is orthogonal to the forward vector and points upward from the listener’s head.
}
AVAudio3DVectorOrientation - A structure that represents two orthogonal vectors that describe the orientation of the listener in 3D space.
[Full Topic] [Full Topic]: https://developer.apple.com/documentation/AVFAudio/AVAudio3DVectorOrientation
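Since the Up vector is defined as orthogonal to the Forward vector, a dot product of zero is a quick sanity check on an orientation. A self-contained sketch with the types redeclared locally (the particular default vectors shown are an illustrative convention, not mandated by the binding):

```go
package main

import "fmt"

// Local stand-ins for the binding's types so the snippet is self-contained.
type AVAudio3DVector struct{ X, Y, Z float32 }

type AVAudio3DVectorOrientation struct {
	Forward, Up AVAudio3DVector
}

// dot returns the dot product of two vectors; it is zero when the vectors
// are orthogonal.
func dot(a, b AVAudio3DVector) float32 {
	return a.X*b.X + a.Y*b.Y + a.Z*b.Z
}

func main() {
	// A common convention: facing down the negative z-axis with +y as up.
	o := AVAudio3DVectorOrientation{
		Forward: AVAudio3DVector{0, 0, -1},
		Up:      AVAudio3DVector{0, 1, 0},
	}
	fmt.Println(dot(o.Forward, o.Up) == 0) // true: the vectors are orthogonal
}
```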
type AVAudioApplication ¶
type AVAudioApplication struct {
objectivec.Object
}
An object that manages one or more audio sessions that belong to an app.
Overview ¶
Access the shared audio application instance to control app-level audio operations, such as requesting microphone permission and controlling audio input muting.
Requesting audio recording permission ¶
- AVAudioApplication.RecordPermission: The app’s permission to record audio.
Managing audio input mute state ¶
- AVAudioApplication.InputMuted: A Boolean value that indicates whether the app’s audio input is in a muted state.
- AVAudioApplication.SetInputMutedError: Sets a Boolean value that indicates whether the app’s audio input is in a muted state.
- AVAudioApplication.SetInputMuteStateChangeHandlerError: Sets a callback to handle changes to application-level audio muting states.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioApplication
func AVAudioApplicationFromID ¶
func AVAudioApplicationFromID(id objc.ID) AVAudioApplication
AVAudioApplicationFromID constructs an AVAudioApplication from an objc.ID.
An object that manages one or more audio sessions that belong to an app.
func NewAVAudioApplication ¶
func NewAVAudioApplication() AVAudioApplication
NewAVAudioApplication creates a new AVAudioApplication instance.
func (AVAudioApplication) Autorelease ¶
func (a AVAudioApplication) Autorelease() AVAudioApplication
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioApplication) Init ¶
func (a AVAudioApplication) Init() AVAudioApplication
Init initializes the instance.
func (AVAudioApplication) InputMuted ¶
func (a AVAudioApplication) InputMuted() bool
A Boolean value that indicates whether the app’s audio input is in a muted state.
Discussion ¶
Set a new value for this property by calling the [SetInputMutedError] method.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioApplication/isInputMuted
func (AVAudioApplication) RecordPermission ¶
func (a AVAudioApplication) RecordPermission() AVAudioApplicationRecordPermission
The app’s permission to record audio.
Discussion ¶
See [RequestRecordPermissionWithCompletionHandler] for more information.
func (AVAudioApplication) SetInputMuteStateChangeHandlerError ¶
func (a AVAudioApplication) SetInputMuteStateChangeHandlerError(inputMuteHandler func(bool) bool) (bool, error)
Sets a callback to handle changes to application-level audio muting states.
inputMuteHandler: A callback that the system invokes when the input mute state changes. If the callback receives a [true] value, mute all input audio samples until the next time the system calls the handler. Return [true] if you muted input successfully; in exceptional cases, return [false] to indicate that the mute action failed. // [false]: https://developer.apple.com/documentation/Swift/false [true]: https://developer.apple.com/documentation/Swift/true
Discussion ¶
Use this method to set a closure that handles your macOS app’s input muting logic. The system calls this closure when the input mute state changes, either because you set the [InputMuted] state or because a gesture on a Bluetooth audio accessory (certain AirPods and Beats headphones) changed the mute state.
Because the input mute handling logic should live in a single place, subsequent calls to this method overwrite any previously registered block with the one you provide. Pass `nil` to cancel the callback.
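The register-or-replace semantics described above (one handler at a time; a nil handler cancels) can be sketched generically. The types below are illustrative stand-ins, not part of the binding:

```go
package main

import "fmt"

// muteHandlerRegistry illustrates the single-handler semantics: each Set
// replaces the previous handler, and a nil handler cancels the callback.
type muteHandlerRegistry struct {
	handler func(muted bool) bool
}

func (r *muteHandlerRegistry) Set(h func(bool) bool) { r.handler = h }

// notify simulates the system invoking the registered handler on a mute
// state change; it reports false when no handler is registered.
func (r *muteHandlerRegistry) notify(muted bool) bool {
	if r.handler == nil {
		return false
	}
	return r.handler(muted)
}

func main() {
	var r muteHandlerRegistry
	r.Set(func(muted bool) bool { return true })  // first handler
	r.Set(func(muted bool) bool { return muted }) // replaces the first
	fmt.Println(r.notify(true), r.notify(false)) // true false
	r.Set(nil)                                   // cancel the callback
	fmt.Println(r.notify(true))                  // false: no handler registered
}
```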
func (AVAudioApplication) SetInputMutedError ¶
func (a AVAudioApplication) SetInputMutedError(muted bool) (bool, error)
Sets a Boolean value that indicates whether the app’s audio input is in a muted state.
muted: A Boolean value that indicates the new mute state.
Discussion ¶
On platforms that use AVAudioSession, setting the value to true mutes all sources of audio input in the app. On macOS, the system instead invokes the callback that you register by calling [SetInputMuteStateChangeHandlerError] to handle input muting.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioApplication/setInputMuted(_:)
type AVAudioApplicationClass ¶
type AVAudioApplicationClass struct {
// contains filtered or unexported fields
}
func GetAVAudioApplicationClass ¶
func GetAVAudioApplicationClass() AVAudioApplicationClass
GetAVAudioApplicationClass returns the class object for AVAudioApplication.
func (AVAudioApplicationClass) Alloc ¶
func (ac AVAudioApplicationClass) Alloc() AVAudioApplication
Alloc allocates memory for a new instance of the class.
func (AVAudioApplicationClass) Class ¶
func (ac AVAudioApplicationClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVAudioApplicationClass) RequestRecordPermission ¶
func (ac AVAudioApplicationClass) RequestRecordPermission(ctx context.Context) (bool, error)
RequestRecordPermission is a synchronous wrapper around [AVAudioApplication.RequestRecordPermissionWithCompletionHandler]. It blocks until the completion handler fires or the context is cancelled.
func (AVAudioApplicationClass) RequestRecordPermissionWithCompletionHandler ¶
func (_AVAudioApplicationClass AVAudioApplicationClass) RequestRecordPermissionWithCompletionHandler(response BoolHandler)
Requests the user’s permission to record audio.
response: A Boolean value that indicates whether the user grants the app permission to record audio.
Discussion ¶
Recording audio requires explicit permission from the user. The first time your app attempts to record audio input, the system automatically prompts the user for permission. You can also explicitly ask for permission by calling this method. This method returns immediately, but the system waits for user input if the user hasn’t previously granted or denied recording permission.
Unless a user grants your app permission to record audio, it captures only silence (zeroed out audio samples).
After a user responds to a recording permission prompt from your app, the system remembers their choice and won’t prompt them again. If a user denies the app recording permission, they can grant it access in the Privacy & Security section of the Settings app.
func (AVAudioApplicationClass) SharedInstance ¶
func (_AVAudioApplicationClass AVAudioApplicationClass) SharedInstance() AVAudioApplication
Accesses the shared audio application instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioApplication/shared
type AVAudioApplicationMicrophoneInjectionPermission ¶
type AVAudioApplicationMicrophoneInjectionPermission int
const (
	// AVAudioApplicationMicrophoneInjectionPermissionDenied: A person denies the app permission to add audio to calls.
	AVAudioApplicationMicrophoneInjectionPermissionDenied AVAudioApplicationMicrophoneInjectionPermission = 'd'<<24 | 'e'<<16 | 'n'<<8 | 'y' // 'deny'
	// AVAudioApplicationMicrophoneInjectionPermissionGranted: A person grants the app permission to add audio to calls.
	AVAudioApplicationMicrophoneInjectionPermissionGranted AVAudioApplicationMicrophoneInjectionPermission = 'g'<<24 | 'r'<<16 | 'n'<<8 | 't' // 'grnt'
	// AVAudioApplicationMicrophoneInjectionPermissionServiceDisabled: A person disables this service for all apps.
	AVAudioApplicationMicrophoneInjectionPermissionServiceDisabled AVAudioApplicationMicrophoneInjectionPermission = 's'<<24 | 'r'<<16 | 'd'<<8 | 's' // 'srds'
	// AVAudioApplicationMicrophoneInjectionPermissionUndetermined: The app hasn’t requested a person’s permission to add audio to calls.
	AVAudioApplicationMicrophoneInjectionPermissionUndetermined AVAudioApplicationMicrophoneInjectionPermission = 'u'<<24 | 'n'<<16 | 'd'<<8 | 't' // 'undt'
)
func (AVAudioApplicationMicrophoneInjectionPermission) String ¶
func (e AVAudioApplicationMicrophoneInjectionPermission) String() string
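These permission values are packed four-character codes ('deny', 'grnt', 'srds', 'undt'). A sketch of decoding such a code back into its four-character string; `fourCC` is a hypothetical helper, not part of the binding:

```go
package main

import "fmt"

// fourCC decodes a packed four-character code, such as the permission
// constants above, into its string form.
func fourCC(v int) string {
	return string([]byte{byte(v >> 24), byte(v >> 16), byte(v >> 8), byte(v)})
}

func main() {
	const granted = 'g'<<24 | 'r'<<16 | 'n'<<8 | 't'
	fmt.Println(fourCC(granted)) // grnt
}
```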
type AVAudioApplicationMicrophoneInjectionPermissionHandler ¶
type AVAudioApplicationMicrophoneInjectionPermissionHandler = func(AVAudioApplicationMicrophoneInjectionPermission)
AVAudioApplicationMicrophoneInjectionPermissionHandler handles completion with a primitive value.
Used by:
- [AVAudioApplication.RequestMicrophoneInjectionPermissionWithCompletionHandler]
type AVAudioApplicationRecordPermission ¶
type AVAudioApplicationRecordPermission int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioApplication/recordPermission-swift.enum
const (
	// AVAudioApplicationRecordPermissionDenied: Indicates the user denies the app permission to record audio.
	AVAudioApplicationRecordPermissionDenied AVAudioApplicationRecordPermission = 'd'<<24 | 'e'<<16 | 'n'<<8 | 'y' // 'deny'
	// AVAudioApplicationRecordPermissionGranted: Indicates the user grants the app permission to record audio.
	AVAudioApplicationRecordPermissionGranted AVAudioApplicationRecordPermission = 'g'<<24 | 'r'<<16 | 'n'<<8 | 't' // 'grnt'
	// AVAudioApplicationRecordPermissionUndetermined: Indicates the app hasn’t requested recording permission.
	AVAudioApplicationRecordPermissionUndetermined AVAudioApplicationRecordPermission = 'u'<<24 | 'n'<<16 | 'd'<<8 | 't' // 'undt'
)
func (AVAudioApplicationRecordPermission) String ¶
func (e AVAudioApplicationRecordPermission) String() string
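An app typically branches on these permission values before recording. A sketch of that decision flow, with the type and constants redeclared locally so the snippet stands alone (the action strings are illustrative):

```go
package main

import "fmt"

// Local redeclaration of the binding's record-permission values.
type recordPermission int

const (
	permissionUndetermined recordPermission = 'u'<<24 | 'n'<<16 | 'd'<<8 | 't'
	permissionDenied       recordPermission = 'd'<<24 | 'e'<<16 | 'n'<<8 | 'y'
	permissionGranted      recordPermission = 'g'<<24 | 'r'<<16 | 'n'<<8 | 't'
)

// nextStep maps a permission state to the action an app would take.
func nextStep(p recordPermission) string {
	switch p {
	case permissionGranted:
		return "start recording"
	case permissionUndetermined:
		return "request permission"
	default:
		return "direct the user to Privacy & Security in Settings"
	}
}

func main() {
	fmt.Println(nextStep(permissionGranted)) // start recording
}
```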
type AVAudioBuffer ¶
type AVAudioBuffer struct {
objectivec.Object
}
An object that represents a buffer of audio data with a format.
Getting the Buffer Format ¶
- AVAudioBuffer.Format: The format of the audio in the buffer.
Getting the Audio Buffers ¶
- AVAudioBuffer.AudioBufferList: The buffer’s underlying audio buffer list.
- AVAudioBuffer.MutableAudioBufferList: A mutable version of the buffer’s underlying audio buffer list.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioBuffer
func AVAudioBufferFromID ¶
func AVAudioBufferFromID(id objc.ID) AVAudioBuffer
AVAudioBufferFromID constructs an AVAudioBuffer from an objc.ID.
An object that represents a buffer of audio data with a format.
func NewAVAudioBuffer ¶
func NewAVAudioBuffer() AVAudioBuffer
NewAVAudioBuffer creates a new AVAudioBuffer instance.
func (AVAudioBuffer) AudioBufferList ¶
func (a AVAudioBuffer) AudioBufferList() objectivec.IObject
The buffer’s underlying audio buffer list.
Discussion ¶
A buffer list is a variable-length array that contains an array of audio buffer instances. You use it with lower-level Core Audio and Audio Toolbox APIs.
You must not modify the buffer list structure, although you can modify buffer contents.
The `mDataByteSize` fields of this audio buffer list express the buffer’s current [FrameLength].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioBuffer/audioBufferList
func (AVAudioBuffer) Autorelease ¶
func (a AVAudioBuffer) Autorelease() AVAudioBuffer
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioBuffer) Format ¶
func (a AVAudioBuffer) Format() IAVAudioFormat
The format of the audio in the buffer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioBuffer/format
func (AVAudioBuffer) Init ¶
func (a AVAudioBuffer) Init() AVAudioBuffer
Init initializes the instance.
func (AVAudioBuffer) MutableAudioBufferList ¶
func (a AVAudioBuffer) MutableAudioBufferList() objectivec.IObject
A mutable version of the buffer’s underlying audio buffer list.
Discussion ¶
You use this with some lower-level Core Audio and Audio Toolbox APIs that require a mutable AudioBufferList (for example, the AudioConverterConvertComplexBuffer(_:_:_:_:) function).
The `mDataByteSize` fields of this audio buffer list express the buffer’s current [FrameCapacity]. If you alter the capacity, modify the buffer’s `frameLength` to match.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioBuffer/mutableAudioBufferList
type AVAudioBufferClass ¶
type AVAudioBufferClass struct {
// contains filtered or unexported fields
}
func GetAVAudioBufferClass ¶
func GetAVAudioBufferClass() AVAudioBufferClass
GetAVAudioBufferClass returns the class object for AVAudioBuffer.
func (AVAudioBufferClass) Alloc ¶
func (ac AVAudioBufferClass) Alloc() AVAudioBuffer
Alloc allocates memory for a new instance of the class.
func (AVAudioBufferClass) Class ¶
func (ac AVAudioBufferClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioChannelCount ¶
type AVAudioChannelCount = uint32
AVAudioChannelCount is the number of audio channels.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelCount
type AVAudioChannelLayout ¶
type AVAudioChannelLayout struct {
objectivec.Object
}
An object that describes the roles of a set of audio channels.
Overview ¶
The AVAudioChannelLayout class is a thin wrapper for Core Audio’s AudioChannelLayout.
Creating an Audio Channel Layout ¶
- AVAudioChannelLayout.InitWithLayout: Creates an audio channel layout object from an existing one.
- AVAudioChannelLayout.InitWithLayoutTag: Creates an audio channel layout object from a layout tag.
Getting Audio Channel Layout Properties ¶
- AVAudioChannelLayout.ChannelCount: The number of channels of audio data.
- AVAudioChannelLayout.Layout: The underlying audio channel layout.
- AVAudioChannelLayout.LayoutTag: The audio channel’s underlying layout tag.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout
func AVAudioChannelLayoutFromID ¶
func AVAudioChannelLayoutFromID(id objc.ID) AVAudioChannelLayout
AVAudioChannelLayoutFromID constructs an AVAudioChannelLayout from an objc.ID.
An object that describes the roles of a set of audio channels.
func NewAVAudioChannelLayout ¶
func NewAVAudioChannelLayout() AVAudioChannelLayout
NewAVAudioChannelLayout creates a new AVAudioChannelLayout instance.
func NewAudioChannelLayoutWithLayout ¶
func NewAudioChannelLayoutWithLayout(layout IAVAudioChannelLayout) AVAudioChannelLayout
Creates an audio channel layout object from an existing one.
layout: The existing audio channel layout object.
Return Value ¶
A new AVAudioChannelLayout object.
Discussion ¶
If the audio channel layout object’s tag is kAudioChannelLayoutTag_UseChannelDescriptions, this initializer attempts to convert it to a more specific tag.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/init(layout:)
func NewAudioChannelLayoutWithLayoutTag ¶
func NewAudioChannelLayoutWithLayoutTag(layoutTag objectivec.IObject) AVAudioChannelLayout
Creates an audio channel layout object from a layout tag.
layoutTag: The audio channel layout tag.
Return Value ¶
A new AVAudioChannelLayout object, or `nil` if `layoutTag` is kAudioChannelLayoutTag_UseChannelDescriptions or kAudioChannelLayoutTag_UseChannelBitmap.
layoutTag is a [coreaudiotypes.AudioChannelLayoutTag].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/init(layoutTag:)
func (AVAudioChannelLayout) AVChannelLayoutKey ¶
func (a AVAudioChannelLayout) AVChannelLayoutKey() string
See: https://developer.apple.com/documentation/avfaudio/avchannellayoutkey
func (AVAudioChannelLayout) Autorelease ¶
func (a AVAudioChannelLayout) Autorelease() AVAudioChannelLayout
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioChannelLayout) ChannelCount ¶
func (a AVAudioChannelLayout) ChannelCount() AVAudioChannelCount
The number of channels of audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/channelCount
func (AVAudioChannelLayout) EncodeWithCoder ¶
func (a AVAudioChannelLayout) EncodeWithCoder(coder foundation.INSCoder)
func (AVAudioChannelLayout) Init ¶
func (a AVAudioChannelLayout) Init() AVAudioChannelLayout
Init initializes the instance.
func (AVAudioChannelLayout) InitWithLayout ¶
func (a AVAudioChannelLayout) InitWithLayout(layout IAVAudioChannelLayout) AVAudioChannelLayout
Creates an audio channel layout object from an existing one.
layout: The existing audio channel layout object.
Return Value ¶
A new AVAudioChannelLayout object.
Discussion ¶
If the audio channel layout object’s tag is kAudioChannelLayoutTag_UseChannelDescriptions, this initializer attempts to convert it to a more specific tag.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/init(layout:)
func (AVAudioChannelLayout) InitWithLayoutTag ¶
func (a AVAudioChannelLayout) InitWithLayoutTag(layoutTag objectivec.IObject) AVAudioChannelLayout
Creates an audio channel layout object from a layout tag.
layoutTag: The audio channel layout tag.
layoutTag is a [coreaudiotypes.AudioChannelLayoutTag].
Return Value ¶
A new AVAudioChannelLayout object, or `nil` if `layoutTag` is kAudioChannelLayoutTag_UseChannelDescriptions or kAudioChannelLayoutTag_UseChannelBitmap.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/init(layoutTag:)
func (AVAudioChannelLayout) Layout ¶
func (a AVAudioChannelLayout) Layout() IAVAudioChannelLayout
The underlying audio channel layout.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/layout
func (AVAudioChannelLayout) LayoutTag ¶
func (a AVAudioChannelLayout) LayoutTag() objectivec.IObject
The audio channel’s underlying layout tag.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/layoutTag
type AVAudioChannelLayoutClass ¶
type AVAudioChannelLayoutClass struct {
// contains filtered or unexported fields
}
func GetAVAudioChannelLayoutClass ¶
func GetAVAudioChannelLayoutClass() AVAudioChannelLayoutClass
GetAVAudioChannelLayoutClass returns the class object for AVAudioChannelLayout.
func (AVAudioChannelLayoutClass) Alloc ¶
func (ac AVAudioChannelLayoutClass) Alloc() AVAudioChannelLayout
Alloc allocates memory for a new instance of the class.
func (AVAudioChannelLayoutClass) Class ¶
func (ac AVAudioChannelLayoutClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVAudioChannelLayoutClass) LayoutWithLayout ¶
func (_AVAudioChannelLayoutClass AVAudioChannelLayoutClass) LayoutWithLayout(layout IAVAudioChannelLayout) AVAudioChannelLayout
Creates an audio channel layout object from an existing one.
layout: The existing audio channel layout object.
Return Value ¶
A new AVAudioChannelLayout object.
Discussion ¶
If the layout’s tag is kAudioChannelLayoutTag_UseChannelDescriptions, the method attempts to convert it to a more specific tag.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/layoutWithLayout:
func (AVAudioChannelLayoutClass) LayoutWithLayoutTag ¶
func (_AVAudioChannelLayoutClass AVAudioChannelLayoutClass) LayoutWithLayoutTag(layoutTag objectivec.IObject) AVAudioChannelLayout
Creates an audio channel layout object from an audio channel layout tag.
layoutTag: The audio channel layout tag.
layoutTag is a [coreaudiotypes.AudioChannelLayoutTag].
Return Value ¶
A new AVAudioChannelLayout object, or `nil` if `layoutTag` is kAudioChannelLayoutTag_UseChannelDescriptions or kAudioChannelLayoutTag_UseChannelBitmap.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout/layoutWithLayoutTag:
type AVAudioCommonFormat ¶
type AVAudioCommonFormat uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCommonFormat
const (
	// AVAudioOtherFormat: A format other than one the enumeration specifies.
	AVAudioOtherFormat AVAudioCommonFormat = 0
	// AVAudioPCMFormatFloat32: A format that represents the standard format as native-endian floats.
	AVAudioPCMFormatFloat32 AVAudioCommonFormat = 1
	// AVAudioPCMFormatFloat64: A format that represents native-endian doubles.
	AVAudioPCMFormatFloat64 AVAudioCommonFormat = 2
	// AVAudioPCMFormatInt16: A format that represents signed 16-bit native-endian integers.
	AVAudioPCMFormatInt16 AVAudioCommonFormat = 3
	// AVAudioPCMFormatInt32: A format that represents signed 32-bit native-endian integers.
	AVAudioPCMFormatInt32 AVAudioCommonFormat = 4
)
func (AVAudioCommonFormat) String ¶
func (e AVAudioCommonFormat) String() string
type AVAudioCompressedBuffer ¶
type AVAudioCompressedBuffer struct {
AVAudioBuffer
}
An object that represents an audio buffer that you use for compressed audio formats.
Creating an Audio Buffer ¶
- AVAudioCompressedBuffer.InitWithFormatPacketCapacity: Creates a buffer that contains constant bytes per packet of audio data in a compressed state.
- AVAudioCompressedBuffer.InitWithFormatPacketCapacityMaximumPacketSize: Creates a buffer that contains audio data in a compressed state.
Getting Audio Buffer Properties ¶
- AVAudioCompressedBuffer.ByteCapacity: The buffer’s capacity, in bytes.
- AVAudioCompressedBuffer.ByteLength: The number of valid bytes in the buffer.
- AVAudioCompressedBuffer.SetByteLength
- AVAudioCompressedBuffer.Data: The audio buffer’s data bytes.
- AVAudioCompressedBuffer.MaximumPacketSize: The maximum size of a packet, in bytes.
- AVAudioCompressedBuffer.PacketCapacity: The total number of packets that the buffer can contain.
- AVAudioCompressedBuffer.PacketCount: The number of packets currently in the buffer.
- AVAudioCompressedBuffer.SetPacketCount
- AVAudioCompressedBuffer.PacketDescriptions: The buffer’s array of packet descriptions.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer
func AVAudioCompressedBufferFromID ¶
func AVAudioCompressedBufferFromID(id objc.ID) AVAudioCompressedBuffer
AVAudioCompressedBufferFromID constructs an AVAudioCompressedBuffer from an objc.ID.
An object that represents an audio buffer that you use for compressed audio formats.
func NewAVAudioCompressedBuffer ¶
func NewAVAudioCompressedBuffer() AVAudioCompressedBuffer
NewAVAudioCompressedBuffer creates a new AVAudioCompressedBuffer instance.
func NewAudioCompressedBufferWithFormatPacketCapacity ¶
func NewAudioCompressedBufferWithFormatPacketCapacity(format IAVAudioFormat, packetCapacity AVAudioPacketCount) AVAudioCompressedBuffer
Creates a buffer that contains constant bytes per packet of audio data in a compressed state.
format: The format of the audio the buffer contains.
packetCapacity: The capacity of the buffer, in packets.
Return Value ¶
A new AVAudioCompressedBuffer instance.
Discussion ¶
This fails if the format is PCM or if the format has variable bytes per packet (that is, `format.StreamDescription()->mBytesPerPacket == 0`).
func NewAudioCompressedBufferWithFormatPacketCapacityMaximumPacketSize ¶
func NewAudioCompressedBufferWithFormatPacketCapacityMaximumPacketSize(format IAVAudioFormat, packetCapacity AVAudioPacketCount, maximumPacketSize int) AVAudioCompressedBuffer
Creates a buffer that contains audio data in a compressed state.
format: The format of the audio the buffer contains.
packetCapacity: The capacity of the buffer, in packets.
maximumPacketSize: The maximum size in bytes of a packet in a compressed state.
Return Value ¶
A new AVAudioCompressedBuffer instance.
Discussion ¶
You can obtain the maximum packet size from the [MaximumOutputPacketSize] property of an AVAudioConverter you configure for encoding this format.
The method raises an exception if the format is PCM.
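The two-step pattern the discussion describes — query a converter for the codec's worst-case packet size, then size the buffer with it — can be sketched as follows. This is a hedged illustration that assumes the package is imported as `avfaudio` and that the two AVAudioFormat values already exist; only the constructor and property calls documented on this page are used.

```go
// Hedged sketch: sizing a compressed buffer from a converter's
// MaximumOutputPacketSize. The formats are assumed to be constructed
// elsewhere; the packet capacity of 128 is an arbitrary example value.
func makeEncoderOutputBuffer(pcmFormat, compressedFormat avfaudio.IAVAudioFormat) avfaudio.AVAudioCompressedBuffer {
	// A converter configured for encoding knows the codec's worst-case packet size.
	conv := avfaudio.NewAudioConverterFromFormatToFormat(pcmFormat, compressedFormat)
	maxSize := conv.MaximumOutputPacketSize()

	// Allocate room for 128 packets, each up to maxSize bytes.
	return avfaudio.NewAudioCompressedBufferWithFormatPacketCapacityMaximumPacketSize(
		compressedFormat, avfaudio.AVAudioPacketCount(128), maxSize)
}
```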
func (AVAudioCompressedBuffer) Autorelease ¶
func (a AVAudioCompressedBuffer) Autorelease() AVAudioCompressedBuffer
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioCompressedBuffer) ByteCapacity ¶
func (a AVAudioCompressedBuffer) ByteCapacity() uint32
The buffer’s capacity, in bytes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer/byteCapacity
func (AVAudioCompressedBuffer) ByteLength ¶
func (a AVAudioCompressedBuffer) ByteLength() uint32
The number of valid bytes in the buffer.
Discussion ¶
You can change this value as part of an operation that modifies the contents.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer/byteLength
func (AVAudioCompressedBuffer) Data ¶
func (a AVAudioCompressedBuffer) Data() unsafe.Pointer
The audio buffer’s data bytes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer/data
func (AVAudioCompressedBuffer) Init ¶
func (a AVAudioCompressedBuffer) Init() AVAudioCompressedBuffer
Init initializes the instance.
func (AVAudioCompressedBuffer) InitWithFormatPacketCapacity ¶
func (a AVAudioCompressedBuffer) InitWithFormatPacketCapacity(format IAVAudioFormat, packetCapacity AVAudioPacketCount) AVAudioCompressedBuffer
Creates a buffer that contains constant bytes per packet of audio data in a compressed state.
format: The format of the audio the buffer contains.
packetCapacity: The capacity of the buffer, in packets.
Return Value ¶
A new AVAudioCompressedBuffer instance.
Discussion ¶
This fails if the format is PCM or if the format has variable bytes per packet (for example, `format.StreamDescription()->mBytesPerPacket == 0`).
func (AVAudioCompressedBuffer) InitWithFormatPacketCapacityMaximumPacketSize ¶
func (a AVAudioCompressedBuffer) InitWithFormatPacketCapacityMaximumPacketSize(format IAVAudioFormat, packetCapacity AVAudioPacketCount, maximumPacketSize int) AVAudioCompressedBuffer
Creates a buffer that contains audio data in a compressed state.
format: The format of the audio the buffer contains.
packetCapacity: The capacity of the buffer, in packets.
maximumPacketSize: The maximum size in bytes of a packet in a compressed state.
Return Value ¶
A new AVAudioCompressedBuffer instance.
Discussion ¶
You can obtain the maximum packet size from the [MaximumOutputPacketSize] property of an AVAudioConverter you configure for encoding this format.
The method raises an exception if the format is PCM.
func (AVAudioCompressedBuffer) MaximumPacketSize ¶
func (a AVAudioCompressedBuffer) MaximumPacketSize() int
The maximum size of a packet, in bytes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer/maximumPacketSize
func (AVAudioCompressedBuffer) PacketCapacity ¶
func (a AVAudioCompressedBuffer) PacketCapacity() AVAudioPacketCount
The total number of packets that the buffer can contain.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer/packetCapacity
func (AVAudioCompressedBuffer) PacketCount ¶
func (a AVAudioCompressedBuffer) PacketCount() AVAudioPacketCount
The number of packets currently in the buffer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer/packetCount
func (AVAudioCompressedBuffer) PacketDependencies ¶
func (a AVAudioCompressedBuffer) PacketDependencies() objectivec.IObject
The buffer’s array of packet dependencies.
Discussion ¶
If the audio format doesn’t use packet dependencies, this value is `nil`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer/packetDependencies-5oae6
func (AVAudioCompressedBuffer) PacketDescriptions ¶
func (a AVAudioCompressedBuffer) PacketDescriptions() objectivec.IObject
The buffer’s array of packet descriptions.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer/packetDescriptions
func (AVAudioCompressedBuffer) SetByteLength ¶
func (a AVAudioCompressedBuffer) SetByteLength(value uint32)
func (AVAudioCompressedBuffer) SetPacketCount ¶
func (a AVAudioCompressedBuffer) SetPacketCount(value AVAudioPacketCount)
type AVAudioCompressedBufferClass ¶
type AVAudioCompressedBufferClass struct {
// contains filtered or unexported fields
}
func GetAVAudioCompressedBufferClass ¶
func GetAVAudioCompressedBufferClass() AVAudioCompressedBufferClass
GetAVAudioCompressedBufferClass returns the class object for AVAudioCompressedBuffer.
func (AVAudioCompressedBufferClass) Alloc ¶
func (ac AVAudioCompressedBufferClass) Alloc() AVAudioCompressedBuffer
Alloc allocates memory for a new instance of the class.
func (AVAudioCompressedBufferClass) Class ¶
func (ac AVAudioCompressedBufferClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioConnectionPoint ¶
type AVAudioConnectionPoint struct {
objectivec.Object
}
A representation of either a source or destination connection point in the audio engine.
Overview ¶
Instances of this class are immutable.
Creating a Connection Point ¶
- AVAudioConnectionPoint.InitWithNodeBus: Creates a connection point object.
Getting Connection Point Properties ¶
- AVAudioConnectionPoint.InputConnectionPointForNodeInputBus: Returns connection information about a node’s input bus.
- AVAudioConnectionPoint.OutputConnectionPointsForNodeOutputBus: Returns connection information about a node’s output bus.
- AVAudioConnectionPoint.Bus: The bus on the node in the connection point.
- AVAudioConnectionPoint.Node: The node in the connection point.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConnectionPoint
func AVAudioConnectionPointFromID ¶
func AVAudioConnectionPointFromID(id objc.ID) AVAudioConnectionPoint
AVAudioConnectionPointFromID constructs an AVAudioConnectionPoint from an objc.ID.
A representation of either a source or destination connection point in the audio engine.
func NewAVAudioConnectionPoint ¶
func NewAVAudioConnectionPoint() AVAudioConnectionPoint
NewAVAudioConnectionPoint creates a new AVAudioConnectionPoint instance.
func NewAudioConnectionPointWithNodeBus ¶
func NewAudioConnectionPointWithNodeBus(node IAVAudioNode, bus AVAudioNodeBus) AVAudioConnectionPoint
Creates a connection point object.
node: The source or destination node.
bus: The output or input bus on the node.
Discussion ¶
If the node is `nil`, this method fails and returns `nil`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConnectionPoint/init(node:bus:)
func (AVAudioConnectionPoint) Autorelease ¶
func (a AVAudioConnectionPoint) Autorelease() AVAudioConnectionPoint
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioConnectionPoint) Bus ¶
func (a AVAudioConnectionPoint) Bus() AVAudioNodeBus
The bus on the node in the connection point.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConnectionPoint/bus
func (AVAudioConnectionPoint) Init ¶
func (a AVAudioConnectionPoint) Init() AVAudioConnectionPoint
Init initializes the instance.
func (AVAudioConnectionPoint) InitWithNodeBus ¶
func (a AVAudioConnectionPoint) InitWithNodeBus(node IAVAudioNode, bus AVAudioNodeBus) AVAudioConnectionPoint
Creates a connection point object.
node: The source or destination node.
bus: The output or input bus on the node.
Discussion ¶
If the node is `nil`, this method fails and returns `nil`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConnectionPoint/init(node:bus:)
func (AVAudioConnectionPoint) InputConnectionPointForNodeInputBus ¶
func (a AVAudioConnectionPoint) InputConnectionPointForNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus) IAVAudioConnectionPoint
Returns connection information about a node’s input bus.
node: The node with the input connection you’re querying.
bus: The node’s input bus for the connection you’re querying.
Return Value ¶
An AVAudioConnectionPoint object with connection information on the node’s input bus.
Discussion ¶
Connections are always one-to-one or one-to-many. This method returns `nil` if there’s no connection on the node’s specified input bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/inputConnectionPoint(for:inputBus:)
func (AVAudioConnectionPoint) Node ¶
func (a AVAudioConnectionPoint) Node() IAVAudioNode
The node in the connection point.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConnectionPoint/node
func (AVAudioConnectionPoint) OutputConnectionPointsForNodeOutputBus ¶
func (a AVAudioConnectionPoint) OutputConnectionPointsForNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus) []AVAudioConnectionPoint
Returns connection information about a node’s output bus.
node: The node with the output connections you’re querying.
bus: The node’s output bus for connections you’re querying.
Return Value ¶
An array of AVAudioConnectionPoint objects with connection information on the node’s output bus.
Discussion ¶
Connections are always one-to-one or one-to-many. This method returns an empty array if there are no connections on the node’s specified output bus.
type AVAudioConnectionPointClass ¶
type AVAudioConnectionPointClass struct {
// contains filtered or unexported fields
}
func GetAVAudioConnectionPointClass ¶
func GetAVAudioConnectionPointClass() AVAudioConnectionPointClass
GetAVAudioConnectionPointClass returns the class object for AVAudioConnectionPoint.
func (AVAudioConnectionPointClass) Alloc ¶
func (ac AVAudioConnectionPointClass) Alloc() AVAudioConnectionPoint
Alloc allocates memory for a new instance of the class.
func (AVAudioConnectionPointClass) Class ¶
func (ac AVAudioConnectionPointClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioContentSource ¶
type AVAudioContentSource int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioContentSource
const (
	AVAudioContentSource_AV_Spatial_Live             AVAudioContentSource = 41
	AVAudioContentSource_AV_Spatial_Offline          AVAudioContentSource = 39
	AVAudioContentSource_AV_Traditional_Live         AVAudioContentSource = 40
	AVAudioContentSource_AV_Traditional_Offline      AVAudioContentSource = 38
	AVAudioContentSource_AppleAV_Spatial_Live        AVAudioContentSource = 9
	AVAudioContentSource_AppleAV_Spatial_Offline     AVAudioContentSource = 7
	AVAudioContentSource_AppleAV_Traditional_Live    AVAudioContentSource = 8
	AVAudioContentSource_AppleAV_Traditional_Offline AVAudioContentSource = 6
	AVAudioContentSource_AppleCapture_Spatial        AVAudioContentSource = 2
	AVAudioContentSource_AppleCapture_Spatial_Enhanced AVAudioContentSource = 3
	AVAudioContentSource_AppleCapture_Traditional    AVAudioContentSource = 1
	AVAudioContentSource_AppleMusic_Spatial          AVAudioContentSource = 5
	AVAudioContentSource_AppleMusic_Traditional      AVAudioContentSource = 4
	AVAudioContentSource_ApplePassthrough            AVAudioContentSource = 10
	AVAudioContentSource_Capture_Spatial             AVAudioContentSource = 34
	AVAudioContentSource_Capture_Spatial_Enhanced    AVAudioContentSource = 35
	AVAudioContentSource_Capture_Traditional         AVAudioContentSource = 33
	AVAudioContentSource_Music_Spatial               AVAudioContentSource = 37
	AVAudioContentSource_Music_Traditional           AVAudioContentSource = 36
	AVAudioContentSource_Passthrough                 AVAudioContentSource = 42
	AVAudioContentSource_Reserved                    AVAudioContentSource = 0
	AVAudioContentSource_Unspecified                 AVAudioContentSource = -1
)
func (AVAudioContentSource) String ¶
func (e AVAudioContentSource) String() string
type AVAudioConverter ¶
type AVAudioConverter struct {
objectivec.Object
}
An object that converts streams of audio between formats.
Overview ¶
The audio converter class transforms audio between file formats and audio encodings.
Supported transformations include:
- PCM float, integer, or bit depth conversions
- PCM sample rate conversion
- PCM interleaving and deinterleaving
- Encoding PCM to compressed formats
- Decoding compressed formats to PCM
A single audio converter instance may perform more than one of the above transformations.
Creating an Audio Converter ¶
- AVAudioConverter.InitFromFormatToFormat: Creates an audio converter object from the specified input and output formats.
Converting Audio Formats ¶
- AVAudioConverter.ConvertToBufferErrorWithInputFromBlock: Performs a conversion between audio formats, if the system supports it.
- AVAudioConverter.ConvertToBufferFromBufferError: Performs a basic conversion between audio formats that doesn’t involve converting codecs or sample rates.
Resetting an Audio Converter ¶
- AVAudioConverter.Reset: Resets the converter so you can convert a new audio stream.
Getting Audio Converter Properties ¶
- AVAudioConverter.ChannelMap: An array of integers that indicates which input to derive each output from.
- AVAudioConverter.SetChannelMap
- AVAudioConverter.Dither: A Boolean value that indicates whether dither is on.
- AVAudioConverter.SetDither
- AVAudioConverter.Downmix: A Boolean value that indicates whether the framework mixes the channels instead of remapping.
- AVAudioConverter.SetDownmix
- AVAudioConverter.InputFormat: The format of the input audio stream.
- AVAudioConverter.OutputFormat: The format of the output audio stream.
- AVAudioConverter.MagicCookie: An object that contains metadata for encoders and decoders.
- AVAudioConverter.SetMagicCookie
- AVAudioConverter.MaximumOutputPacketSize: The maximum size of an output packet, in bytes.
Getting Bit Rate Properties ¶
- AVAudioConverter.ApplicableEncodeBitRates: An array of bit rates the framework applies during encoding according to the current formats and settings.
- AVAudioConverter.AvailableEncodeBitRates: An array of all bit rates the codec provides when encoding.
- AVAudioConverter.AvailableEncodeChannelLayoutTags: An array of all output channel layout tags the codec provides when encoding.
- AVAudioConverter.BitRate: The bit rate, in bits per second.
- AVAudioConverter.SetBitRate
- AVAudioConverter.BitRateStrategy: A key value constant the framework uses during encoding.
- AVAudioConverter.SetBitRateStrategy
Getting Sample Rate Properties ¶
- AVAudioConverter.SampleRateConverterQuality: A sample rate converter quality key value.
- AVAudioConverter.SetSampleRateConverterQuality
- AVAudioConverter.SampleRateConverterAlgorithm: A sample rate converter algorithm key value.
- AVAudioConverter.SetSampleRateConverterAlgorithm
- AVAudioConverter.ApplicableEncodeSampleRates: An array of output sample rates that the converter applies according to the current formats and settings, when encoding.
- AVAudioConverter.AvailableEncodeSampleRates: An array of all output sample rates the codec provides when encoding.
Getting Priming Information ¶
- AVAudioConverter.PrimeInfo: The number of priming frames the converter uses.
- AVAudioConverter.SetPrimeInfo
- AVAudioConverter.PrimeMethod: The priming method the sample rate converter or decoder uses.
- AVAudioConverter.SetPrimeMethod
Managing packet dependencies ¶
- AVAudioConverter.AudioSyncPacketFrequency
- AVAudioConverter.SetAudioSyncPacketFrequency
- AVAudioConverter.ContentSource
- AVAudioConverter.SetContentSource
- AVAudioConverter.DynamicRangeControlConfiguration
- AVAudioConverter.SetDynamicRangeControlConfiguration
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter
func AVAudioConverterFromID ¶
func AVAudioConverterFromID(id objc.ID) AVAudioConverter
AVAudioConverterFromID constructs an AVAudioConverter from an objc.ID.
An object that converts streams of audio between formats.
func NewAVAudioConverter ¶
func NewAVAudioConverter() AVAudioConverter
NewAVAudioConverter creates a new AVAudioConverter instance.
func NewAudioConverterFromFormatToFormat ¶
func NewAudioConverterFromFormatToFormat(fromFormat IAVAudioFormat, toFormat IAVAudioFormat) AVAudioConverter
Creates an audio converter object from the specified input and output formats.
fromFormat: The input audio format.
toFormat: The audio format to convert to.
Return Value ¶
An AVAudioConverter instance, or `nil` if the format conversion isn’t possible.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/init(from:to:)
func (AVAudioConverter) ApplicableEncodeBitRates ¶
func (a AVAudioConverter) ApplicableEncodeBitRates() []foundation.NSNumber
An array of bit rates the framework applies during encoding according to the current formats and settings.
Discussion ¶
This property returns `nil` if you’re not encoding.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/applicableEncodeBitRates
func (AVAudioConverter) ApplicableEncodeSampleRates ¶
func (a AVAudioConverter) ApplicableEncodeSampleRates() []foundation.NSNumber
An array of output sample rates that the converter applies according to the current formats and settings, when encoding.
Discussion ¶
This property returns `nil` if you’re not encoding.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/applicableEncodeSampleRates
func (AVAudioConverter) AudioSyncPacketFrequency ¶
func (a AVAudioConverter) AudioSyncPacketFrequency() int
Discussion ¶
The number of packets between consecutive sync packets.
A sync packet is an independently decodable packet that completely refreshes the decoder without requiring other packets. When compressing to a format that supports it (such as APAC), this value sets the distance, in packets, between two sync packets, with non-sync packets in between. Setting it is useful when you save compressed packets to a file and want efficient random access.
Note: Separate sync packets by at least one second of encoded audio (for example, 75 packets).
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/audioSyncPacketFrequency
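The "at least one second apart" guidance above translates into a small calculation over the compressed format's packet rate. The sketch below is plain Go; the 640-frames-per-packet figure is an illustrative assumption, not a documented APAC constant.

```go
package main

import "fmt"

// syncPacketFrequency returns the number of packets between sync packets
// needed to keep them at least minSeconds of encoded audio apart.
// framesPerPacket and sampleRate describe the compressed format.
func syncPacketFrequency(sampleRate, framesPerPacket, minSeconds float64) int {
	packetsPerSecond := sampleRate / framesPerPacket
	return int(packetsPerSecond*minSeconds + 0.5) // round to nearest packet
}

func main() {
	// 48 kHz audio at an assumed 640 frames per packet gives 75 packets
	// per second, matching the "e.g. 75 packets" guidance above.
	fmt.Println(syncPacketFrequency(48000, 640, 1.0)) // 75
}
```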
func (AVAudioConverter) Autorelease ¶
func (a AVAudioConverter) Autorelease() AVAudioConverter
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioConverter) AvailableEncodeBitRates ¶
func (a AVAudioConverter) AvailableEncodeBitRates() []foundation.NSNumber
An array of all bit rates the codec provides when encoding.
Discussion ¶
This property returns `nil` if you’re not encoding.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/availableEncodeBitRates
func (AVAudioConverter) AvailableEncodeChannelLayoutTags ¶
func (a AVAudioConverter) AvailableEncodeChannelLayoutTags() []foundation.NSNumber
An array of all output channel layout tags the codec provides when encoding.
Discussion ¶
This property returns `nil` if you’re not encoding.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/availableEncodeChannelLayoutTags
func (AVAudioConverter) AvailableEncodeSampleRates ¶
func (a AVAudioConverter) AvailableEncodeSampleRates() []foundation.NSNumber
An array of all output sample rates the codec provides when encoding.
Discussion ¶
This property returns `nil` if you’re not encoding.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/availableEncodeSampleRates
func (AVAudioConverter) BitRate ¶
func (a AVAudioConverter) BitRate() int
The bit rate, in bits per second.
Discussion ¶
This value only applies when encoding.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/bitRate
func (AVAudioConverter) BitRateStrategy ¶
func (a AVAudioConverter) BitRateStrategy() string
A key value constant the framework uses during encoding.
Discussion ¶
This property returns `nil` if you’re not encoding. For information about possible values, see AVEncoderBitRateStrategyKey.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/bitRateStrategy
func (AVAudioConverter) ChannelMap ¶
func (a AVAudioConverter) ChannelMap() []foundation.NSNumber
An array of integers that indicates which input to derive each output from.
Discussion ¶
The array size equals the number of output channels. Each element’s value is the input channel number, starting with zero, that the framework copies to that output.
A negative value means that the output channel doesn’t have a source and is silent.
Setting a channel map overrides channel mapping due to any channel layouts in the input and output formats that you supply.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/channelMap
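The mapping convention above (one entry per output channel, each holding a zero-based input channel index or a negative value for silence) can be illustrated with a plain-Go helper. The values would still need wrapping as NSNumbers before passing them to SetChannelMap; the helper name is hypothetical.

```go
package main

import "fmt"

// buildChannelMap returns a channel map following the documented
// convention: element i holds the zero-based input channel that feeds
// output channel i, or -1 for a silent output channel.
func buildChannelMap(outputChannels int, sources map[int]int) []int {
	m := make([]int, outputChannels)
	for i := range m {
		m[i] = -1 // silent unless a source is assigned
	}
	for out, in := range sources {
		m[out] = in
	}
	return m
}

func main() {
	// Swap a stereo pair into the first two of four output channels,
	// leaving the last two silent: [1 0 -1 -1].
	fmt.Println(buildChannelMap(4, map[int]int{0: 1, 1: 0}))
}
```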
func (AVAudioConverter) ContentSource ¶
func (a AVAudioConverter) ContentSource() AVAudioContentSource
Discussion ¶
An index that selects a predefined content source type, which describes the content and how it was generated.
Note: The framework supports this only when compressing audio to formats that support it.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/contentSource
func (AVAudioConverter) ConvertToBufferErrorWithInputFromBlock ¶
func (a AVAudioConverter) ConvertToBufferErrorWithInputFromBlock(outputBuffer IAVAudioBuffer, outError foundation.INSError, inputBlock AVAudioConverterInputBlock) AVAudioConverterOutputStatus
Performs a conversion between audio formats, if the system supports it.
outputBuffer: The output audio buffer.
outError: The error if the conversion fails.
inputBlock: A block the framework calls to get input data.
Return Value ¶
An AVAudioConverterOutputStatus type that indicates the conversion status.
Discussion ¶
The method attempts to fill the buffer to its capacity. On return, the buffer’s length indicates the number of sample frames the framework successfully converts.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/convert(to:error:withInputFrom:)
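The input block works as a pull model: the converter repeatedly asks for data, and the block answers with a buffer plus a status (the AVAudioConverterInputStatus values documented below). The following plain-Go sketch simulates that protocol over byte slices; the real block takes a packet count and a status out-pointer and returns an AVAudioBuffer.

```go
package main

import "fmt"

// Statuses mirroring the documented AVAudioConverterInputStatus values.
const (
	haveData    = 0
	noDataNow   = 1
	endOfStream = 2
)

// chunkSource returns an input-block-style function over precut chunks:
// each call hands back the next chunk and a status, the way an
// AVAudioConverterInputBlock supplies buffers on demand.
func chunkSource(chunks [][]byte) func() ([]byte, int) {
	i := 0
	return func() ([]byte, int) {
		if i >= len(chunks) {
			return nil, endOfStream
		}
		c := chunks[i]
		i++
		return c, haveData
	}
}

func main() {
	next := chunkSource([][]byte{{1, 2}, {3}})
	for {
		c, status := next()
		if status == endOfStream {
			fmt.Println("end of stream")
			break
		}
		fmt.Println("got", len(c), "bytes")
	}
}
```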
func (AVAudioConverter) ConvertToBufferFromBufferError ¶
func (a AVAudioConverter) ConvertToBufferFromBufferError(outputBuffer IAVAudioPCMBuffer, inputBuffer IAVAudioPCMBuffer) (bool, error)
Performs a basic conversion between audio formats that doesn’t involve converting codecs or sample rates.
outputBuffer: The output audio buffer.
inputBuffer: The input audio buffer.
Discussion ¶
The output buffer’s [FrameCapacity] value needs to be at least as large as the [FrameLength] value of the `inputBuffer`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/convert(to:from:)
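The capacity precondition above is worth checking before calling this method. A minimal plain-Go encoding of the rule (the helper name is hypothetical):

```go
package main

import "fmt"

// canConvert reports whether a basic buffer-to-buffer conversion can
// proceed, per the rule above: the output buffer's frame capacity must
// be at least the input buffer's frame length.
func canConvert(outFrameCapacity, inFrameLength uint32) bool {
	return outFrameCapacity >= inFrameLength
}

func main() {
	fmt.Println(canConvert(4096, 1024)) // true: output has room
	fmt.Println(canConvert(512, 1024))  // false: output too small
}
```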
func (AVAudioConverter) Dither ¶
func (a AVAudioConverter) Dither() bool
A Boolean value that indicates whether dither is on.
Discussion ¶
This property defaults to `false`. When `true`, the framework determines whether dithering makes sense for the formats and settings.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/dither
func (AVAudioConverter) Downmix ¶
func (a AVAudioConverter) Downmix() bool
A Boolean value that indicates whether the framework mixes the channels instead of remapping.
Discussion ¶
This property defaults to `false`, indicating that the framework remaps the channels. When `true`, and channel remapping is necessary, the framework mixes the channels.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/downmix
func (AVAudioConverter) DynamicRangeControlConfiguration ¶
func (a AVAudioConverter) DynamicRangeControlConfiguration() AVAudioDynamicRangeControlConfiguration
Discussion ¶
The encoder’s dynamic range control (DRC) configuration.
When the encoder supports it, this property controls which configuration applies when the encoder generates a bitstream.
Note: The framework supports this only when compressing audio to formats that support it.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/dynamicRangeControlConfiguration
func (AVAudioConverter) Init ¶
func (a AVAudioConverter) Init() AVAudioConverter
Init initializes the instance.
func (AVAudioConverter) InitFromFormatToFormat ¶
func (a AVAudioConverter) InitFromFormatToFormat(fromFormat IAVAudioFormat, toFormat IAVAudioFormat) AVAudioConverter
Creates an audio converter object from the specified input and output formats.
fromFormat: The input audio format.
toFormat: The audio format to convert to.
Return Value ¶
An AVAudioConverter instance, or `nil` if the format conversion isn’t possible.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/init(from:to:)
func (AVAudioConverter) InputFormat ¶
func (a AVAudioConverter) InputFormat() IAVAudioFormat
The format of the input audio stream.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/inputFormat
func (AVAudioConverter) MagicCookie ¶
func (a AVAudioConverter) MagicCookie() foundation.INSData
An object that contains metadata for encoders and decoders.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/magicCookie
func (AVAudioConverter) MaximumOutputPacketSize ¶
func (a AVAudioConverter) MaximumOutputPacketSize() int
The maximum size of an output packet, in bytes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/maximumOutputPacketSize
func (AVAudioConverter) OutputFormat ¶
func (a AVAudioConverter) OutputFormat() IAVAudioFormat
The format of the output audio stream.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/outputFormat
func (AVAudioConverter) PrimeInfo ¶
func (a AVAudioConverter) PrimeInfo() AVAudioConverterPrimeInfo
The number of priming frames the converter uses.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/primeInfo
func (AVAudioConverter) PrimeMethod ¶
func (a AVAudioConverter) PrimeMethod() AVAudioConverterPrimeMethod
The priming method the sample rate converter or decoder uses.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/primeMethod
func (AVAudioConverter) Reset ¶
func (a AVAudioConverter) Reset()
Resets the converter so you can convert a new audio stream.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/reset()
func (AVAudioConverter) SampleRateConverterAlgorithm ¶
func (a AVAudioConverter) SampleRateConverterAlgorithm() string
A sample rate converter algorithm key value.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/sampleRateConverterAlgorithm
func (AVAudioConverter) SampleRateConverterQuality ¶
func (a AVAudioConverter) SampleRateConverterQuality() int
A sample rate converter quality key value.
Discussion ¶
For information about possible key values, see AVSampleRateConverterAudioQualityKey.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter/sampleRateConverterQuality
func (AVAudioConverter) SetAudioSyncPacketFrequency ¶
func (a AVAudioConverter) SetAudioSyncPacketFrequency(value int)
func (AVAudioConverter) SetBitRate ¶
func (a AVAudioConverter) SetBitRate(value int)
func (AVAudioConverter) SetBitRateStrategy ¶
func (a AVAudioConverter) SetBitRateStrategy(value string)
func (AVAudioConverter) SetChannelMap ¶
func (a AVAudioConverter) SetChannelMap(value []foundation.NSNumber)
func (AVAudioConverter) SetContentSource ¶
func (a AVAudioConverter) SetContentSource(value AVAudioContentSource)
func (AVAudioConverter) SetDither ¶
func (a AVAudioConverter) SetDither(value bool)
func (AVAudioConverter) SetDownmix ¶
func (a AVAudioConverter) SetDownmix(value bool)
func (AVAudioConverter) SetDynamicRangeControlConfiguration ¶
func (a AVAudioConverter) SetDynamicRangeControlConfiguration(value AVAudioDynamicRangeControlConfiguration)
func (AVAudioConverter) SetMagicCookie ¶
func (a AVAudioConverter) SetMagicCookie(value foundation.INSData)
func (AVAudioConverter) SetPrimeInfo ¶
func (a AVAudioConverter) SetPrimeInfo(value AVAudioConverterPrimeInfo)
func (AVAudioConverter) SetPrimeMethod ¶
func (a AVAudioConverter) SetPrimeMethod(value AVAudioConverterPrimeMethod)
func (AVAudioConverter) SetSampleRateConverterAlgorithm ¶
func (a AVAudioConverter) SetSampleRateConverterAlgorithm(value string)
func (AVAudioConverter) SetSampleRateConverterQuality ¶
func (a AVAudioConverter) SetSampleRateConverterQuality(value int)
type AVAudioConverterClass ¶
type AVAudioConverterClass struct {
// contains filtered or unexported fields
}
func GetAVAudioConverterClass ¶
func GetAVAudioConverterClass() AVAudioConverterClass
GetAVAudioConverterClass returns the class object for AVAudioConverter.
func (AVAudioConverterClass) Alloc ¶
func (ac AVAudioConverterClass) Alloc() AVAudioConverter
Alloc allocates memory for a new instance of the class.
func (AVAudioConverterClass) Class ¶
func (ac AVAudioConverterClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioConverterInputBlock ¶
type AVAudioConverterInputBlock = func(uint32, *AVAudioConverterInputStatus) AVAudioBuffer
AVAudioConverterInputBlock is a block the converter calls, as needed, to obtain input data during conversion.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverterInputBlock
type AVAudioConverterInputStatus ¶
type AVAudioConverterInputStatus int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverterInputStatus
const (
	// AVAudioConverterInputStatus_EndOfStream: A status that indicates you’re at the end of an audio stream.
	AVAudioConverterInputStatus_EndOfStream AVAudioConverterInputStatus = 2
	// AVAudioConverterInputStatus_HaveData: A status that indicates the normal case where you supply data to the converter.
	AVAudioConverterInputStatus_HaveData AVAudioConverterInputStatus = 0
	// AVAudioConverterInputStatus_NoDataNow: A status that indicates you’re out of data.
	AVAudioConverterInputStatus_NoDataNow AVAudioConverterInputStatus = 1
)
func (AVAudioConverterInputStatus) String ¶
func (e AVAudioConverterInputStatus) String() string
type AVAudioConverterOutputStatus ¶
type AVAudioConverterOutputStatus int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverterOutputStatus
const (
	// AVAudioConverterOutputStatus_EndOfStream: A status that indicates the method reaches the end of the stream, and doesn’t return any data.
	AVAudioConverterOutputStatus_EndOfStream AVAudioConverterOutputStatus = 2
	// AVAudioConverterOutputStatus_Error: A status that indicates the method encounters an error.
	AVAudioConverterOutputStatus_Error AVAudioConverterOutputStatus = 3
	// AVAudioConverterOutputStatus_HaveData: A status that indicates the method returns all of the requested data.
	AVAudioConverterOutputStatus_HaveData AVAudioConverterOutputStatus = 0
	// AVAudioConverterOutputStatus_InputRanDry: A status that indicates the method doesn’t have enough input available to satisfy the request.
	AVAudioConverterOutputStatus_InputRanDry AVAudioConverterOutputStatus = 1
)
func (AVAudioConverterOutputStatus) String ¶
func (e AVAudioConverterOutputStatus) String() string
type AVAudioConverterPrimeInfo ¶
type AVAudioConverterPrimeInfo struct {
LeadingFrames AVAudioFrameCount // The number of leading (previous) input frames the converter requires to perform a high-quality conversion.
TrailingFrames AVAudioFrameCount // The number of trailing input frames, past the end input frame, the converter requires to perform a high-quality conversion.
}
AVAudioConverterPrimeInfo - Priming information for audio conversion.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverterPrimeInfo
type AVAudioConverterPrimeMethod ¶
type AVAudioConverterPrimeMethod int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverterPrimeMethod
const (
	// AVAudioConverterPrimeMethod_None: An option to prime with no leading or trailing frames; the converter assumes both are silence.
	AVAudioConverterPrimeMethod_None AVAudioConverterPrimeMethod = 2
	// AVAudioConverterPrimeMethod_Normal: An option to prime with trailing frames only (zero latency); the converter assumes the leading frames are silence.
	AVAudioConverterPrimeMethod_Normal AVAudioConverterPrimeMethod = 1
	// AVAudioConverterPrimeMethod_Pre: An option to prime with both leading and trailing input frames.
	AVAudioConverterPrimeMethod_Pre AVAudioConverterPrimeMethod = 0
)
func (AVAudioConverterPrimeMethod) String ¶
func (e AVAudioConverterPrimeMethod) String() string
type AVAudioDynamicRangeControlConfiguration ¶
type AVAudioDynamicRangeControlConfiguration int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioDynamicRangeControlConfiguration
const (
	AVAudioDynamicRangeControlConfiguration_Capture AVAudioDynamicRangeControlConfiguration = 4
	AVAudioDynamicRangeControlConfiguration_Movie AVAudioDynamicRangeControlConfiguration = 3
	AVAudioDynamicRangeControlConfiguration_Music AVAudioDynamicRangeControlConfiguration = 1
	AVAudioDynamicRangeControlConfiguration_None AVAudioDynamicRangeControlConfiguration = 0
	AVAudioDynamicRangeControlConfiguration_Speech AVAudioDynamicRangeControlConfiguration = 2
)
func (AVAudioDynamicRangeControlConfiguration) String ¶
func (e AVAudioDynamicRangeControlConfiguration) String() string
type AVAudioEngine ¶
type AVAudioEngine struct {
objectivec.Object
}
An object that manages a graph of audio nodes, controls playback, and configures real-time rendering constraints.
Overview ¶
An audio engine object contains a group of AVAudioNode instances that you attach to form an audio processing chain.
You can connect, disconnect, and remove audio nodes during runtime with minor limitations. Removing an audio node that has differing channel counts, or that’s a mixer, can break the graph. Reconnect audio nodes only when they’re upstream of a mixer.
By default, Audio Engine renders to a connected audio device in real time. You can configure the engine to operate in manual rendering mode when you need to render at, or faster than, real time. In that mode, the engine disconnects from audio devices and your app drives the rendering.
Create an Engine for Audio File Playback ¶
To play an audio file, you create an AVAudioFile with a file that’s open for reading. Create an audio engine object and an AVAudioPlayerNode instance, and then attach the player node to the engine. Next, connect the player node to the audio engine’s output node. The engine performs audio output through an output node, which is a singleton that the engine creates the first time you access it.
Then schedule the audio file for full playback. The callback notifies your app when playback completes.
Before you play the audio, start the engine.
When you’re done, stop the player and the engine.
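The walkthrough above might be sketched as follows. This is a hedged sketch for Apple platforms only: the import path and the `NewAVAudioPlayerNode` constructor are assumptions not shown in this section; only AttachNode, ConnectToFormat, OutputNode, StartAndReturnError, and Stop appear here.

```go
// Sketch of the playback sequence above (Apple platforms only).
// NewAVAudioPlayerNode and the avfaudio import path are assumptions.
engine := avfaudio.NewAVAudioEngine()
player := avfaudio.NewAVAudioPlayerNode() // assumed constructor

// Attach the player node, then connect it to the engine's singleton
// output node. Passing a nil format lets the engine match the
// destination's input bus format to the source's output bus.
engine.AttachNode(player)
engine.ConnectToFormat(player, engine.OutputNode(), nil)

// Schedule the audio file on the player node (scheduling methods live
// on AVAudioPlayerNode), then start the engine before playing.
if ok, err := engine.StartAndReturnError(); !ok {
	// Start fails for graph problems, AVAudioSession errors, or
	// hardware that fails to start.
	log.Fatalf("engine failed to start: %v", err)
}

// ...playback...

// When you're done, stop the player and then the engine.
engine.Stop()
```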
Attaching and Detaching Audio Nodes ¶
- AVAudioEngine.AttachNode: Attaches an audio node to the audio engine.
- AVAudioEngine.DetachNode: Detaches an audio node from the audio engine.
- AVAudioEngine.AttachedNodes: A read-only set that contains the nodes you attach to the audio engine.
Getting the Input, Output, and Main Mixer Nodes ¶
- AVAudioEngine.InputNode: The audio engine’s singleton input audio node.
- AVAudioEngine.OutputNode: The audio engine’s singleton output audio node.
- AVAudioEngine.MainMixerNode: The audio engine’s optional singleton main mixer node.
Connecting and Disconnecting Audio Nodes ¶
- AVAudioEngine.ConnectToFormat: Establishes a connection between two nodes.
- AVAudioEngine.ConnectToFromBusToBusFormat: Establishes a connection between two nodes, specifying the input and output busses.
- AVAudioEngine.DisconnectNodeInput: Removes all input connections of the node.
- AVAudioEngine.DisconnectNodeInputBus: Removes the input connection of a node on the specified bus.
- AVAudioEngine.DisconnectNodeOutput: Removes all output connections of a node.
- AVAudioEngine.DisconnectNodeOutputBus: Removes the output connection of a node on the specified bus.
Managing MIDI Nodes ¶
- AVAudioEngine.ConnectMIDIToFormatEventListBlock: Establishes a MIDI connection between two nodes.
- AVAudioEngine.ConnectMIDIToNodesFormatEventListBlock: Establishes a MIDI connection between a source node and multiple destination nodes.
- AVAudioEngine.DisconnectMIDIFrom: Removes a MIDI connection between two nodes.
- AVAudioEngine.DisconnectMIDIFromNodes: Removes a MIDI connection between one source node and multiple destination nodes.
- AVAudioEngine.DisconnectMIDIInput: Disconnects all input MIDI connections from a node.
- AVAudioEngine.DisconnectMIDIOutput: Disconnects all output MIDI connections from a node.
Playing Audio ¶
- AVAudioEngine.Prepare: Prepares the audio engine for starting.
- AVAudioEngine.StartAndReturnError: Starts the audio engine.
- AVAudioEngine.Running: A Boolean value that indicates whether the audio engine is running.
- AVAudioEngine.Pause: Pauses the audio engine.
- AVAudioEngine.Stop: Stops the audio engine and releases any previously prepared resources.
- AVAudioEngine.Reset: Resets all audio nodes in the audio engine.
- AVAudioEngine.MusicSequence: The music sequence instance that you attach to the audio engine, if any.
- AVAudioEngine.SetMusicSequence
Manually Rendering an Audio Engine ¶
- AVAudioEngine.EnableManualRenderingModeFormatMaximumFrameCountError: Sets the engine to operate in manual rendering mode with the render format and maximum frame count you specify.
- AVAudioEngine.DisableManualRenderingMode: Sets the engine to render to or from an audio device.
- AVAudioEngine.RenderOfflineToBufferError: Makes a render call to the engine operating in the offline manual rendering mode.
Getting Manual Rendering Properties ¶
- AVAudioEngine.ManualRenderingBlock: The block that renders the engine when operating in manual rendering mode.
- AVAudioEngine.ManualRenderingFormat: The render format of the engine in manual rendering mode.
- AVAudioEngine.ManualRenderingMaximumFrameCount: The maximum number of PCM sample frames the engine produces in any single render call in manual rendering mode.
- AVAudioEngine.ManualRenderingMode: The manual rendering mode configured on the engine.
- AVAudioEngine.ManualRenderingSampleTime: An indication of where the engine is on its render timeline in manual rendering mode.
- AVAudioEngine.AutoShutdownEnabled: A Boolean value that indicates whether autoshutdown is in an enabled state.
- AVAudioEngine.SetAutoShutdownEnabled
- AVAudioEngine.IsInManualRenderingMode: A Boolean value that indicates whether the engine is operating in manual rendering mode.
Using Connection Points ¶
- AVAudioEngine.ConnectToConnectionPointsFromBusFormat: Establishes a connection between a source node and multiple destination nodes.
- AVAudioEngine.InputConnectionPointForNodeInputBus: Returns connection information about a node’s input bus.
- AVAudioEngine.OutputConnectionPointsForNodeOutputBus: Returns connection information about a node’s output bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine
func AVAudioEngineFromID ¶
func AVAudioEngineFromID(id objc.ID) AVAudioEngine
AVAudioEngineFromID constructs a AVAudioEngine from an objc.ID.
An object that manages a graph of audio nodes, controls playback, and configures real-time rendering constraints.
func NewAVAudioEngine ¶
func NewAVAudioEngine() AVAudioEngine
NewAVAudioEngine creates a new AVAudioEngine instance.
func (AVAudioEngine) AttachNode ¶
func (a AVAudioEngine) AttachNode(node IAVAudioNode)
Attaches an audio node to the audio engine.
node: The audio node to attach.
Discussion ¶
An instance of AVAudioNode isn’t usable until you attach it to the audio engine using this method.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/attach(_:)
func (AVAudioEngine) AttachedNodes ¶
func (a AVAudioEngine) AttachedNodes() foundation.INSSet
A read-only set that contains the nodes you attach to the audio engine.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/attachedNodes
func (AVAudioEngine) AutoShutdownEnabled ¶
func (a AVAudioEngine) AutoShutdownEnabled() bool
A Boolean value that indicates whether autoshutdown is in an enabled state.
Discussion ¶
If autoshutdown is in an enabled state, the engine can start and stop the audio hardware dynamically to conserve power. In watchOS, autoshutdown is always in an enabled state. For other platforms, it’s in a disabled state by default.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/isAutoShutdownEnabled
func (AVAudioEngine) Autorelease ¶
func (a AVAudioEngine) Autorelease() AVAudioEngine
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioEngine) ConnectMIDIToFormatEventListBlock ¶
func (a AVAudioEngine) ConnectMIDIToFormatEventListBlock(sourceNode IAVAudioNode, destinationNode IAVAudioNode, format IAVAudioFormat, tapBlock objectivec.IObject)
Establishes a MIDI connection between two nodes.
sourceNode: The source node.
destinationNode: The destination node.
format: If not [NULL], the engine uses this value for the format of the source audio node’s output bus. In all cases, the engine matches the format of the destination audio node’s input bus to the source node’s output bus.
tapBlock: If not [NULL], the source node’s event list block calls this on the real-time thread. The host can tap the MIDI data of the source node through this block.
tapBlock is a [audiotoolbox.AUMIDIEventListBlock].
Discussion ¶
Use this to establish a MIDI connection between a source node and a destination node that has MIDI input capability. This method disconnects any existing MIDI connection that involves the destination node. When making the MIDI connection, this method overwrites the source node’s event list block.
The source node can only be an AVAudioUnit node with the type kAudioUnitType_MIDIProcessor. The destination node types can be kAudioUnitType_MusicDevice, kAudioUnitType_MusicEffect, or kAudioUnitType_MIDIProcessor.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/connectMIDI(_:to:format:eventListBlock:)-73cd1
func (AVAudioEngine) ConnectMIDIToNodesFormatEventListBlock ¶
func (a AVAudioEngine) ConnectMIDIToNodesFormatEventListBlock(sourceNode IAVAudioNode, destinationNodes []AVAudioNode, format IAVAudioFormat, tapBlock objectivec.IObject)
Establishes a MIDI connection between a source node and multiple destination nodes.
sourceNode: The source node.
destinationNodes: An array of objects that specify the destination nodes.
format: If not [NULL], the engine uses this value for the format of the source audio node’s output bus. In all cases, the engine matches the format of each destination audio node’s input bus to the source node’s output bus.
tapBlock: If not [NULL], the source node’s event list block calls this on the real-time thread. The host can tap the MIDI data of the source node through this block.
tapBlock is a [audiotoolbox.AUMIDIEventListBlock].
Discussion ¶
Use this to establish a MIDI connection between a source node and multiple destination nodes that have MIDI input capability. This method disconnects any existing MIDI connection that involves the destination node. When making the MIDI connection, this method overwrites the source node’s event list block.
The source node can only be an AVAudioUnit node with the type kAudioUnitType_MIDIProcessor. The destination node types can be kAudioUnitType_MusicDevice, kAudioUnitType_MusicEffect, or kAudioUnitType_MIDIProcessor.
MIDI connections made with this method specify a single destination connection (one-to-one) or multiple connections (one-to-many), but never many-to-one.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/connectMIDI(_:to:format:eventListBlock:)-7qtd5
func (AVAudioEngine) ConnectToConnectionPointsFromBusFormat ¶
func (a AVAudioEngine) ConnectToConnectionPointsFromBusFormat(sourceNode IAVAudioNode, destNodes []AVAudioConnectionPoint, sourceBus AVAudioNodeBus, format IAVAudioFormat)
Establishes a connection between a source node and multiple destination nodes.
sourceNode: The source node.
destNodes: An array of AVAudioConnectionPoint objects that specify destination nodes and busses.
sourceBus: The output bus on the source node.
format: If not [NULL], the framework uses this value for the format of the source audio node’s output bus. In all cases, the framework matches the format of the destination audio node’s input bus to the source audio node’s output bus.
Discussion ¶
Connections that use this method are either one-to-one or one-to-many.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/connect(_:to:fromBus:format:)
func (AVAudioEngine) ConnectToFormat ¶
func (a AVAudioEngine) ConnectToFormat(node1 IAVAudioNode, node2 IAVAudioNode, format IAVAudioFormat)
Establishes a connection between two nodes.
node1: The source audio node.
node2: The destination audio node.
format: If not [NULL], the engine uses this value for the format of the source audio node’s output bus. In all cases, the engine matches the format of the destination audio node’s input bus to the source audio node’s output bus.
Discussion ¶
This method calls [ConnectToFromBusToBusFormat] using bus `0` for the source audio node and bus `0` for the destination audio node, except when the destination is a mixer, in which case it uses the mixer’s [NextAvailableInputBus].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/connect(_:to:format:)
func (AVAudioEngine) ConnectToFromBusToBusFormat ¶
func (a AVAudioEngine) ConnectToFromBusToBusFormat(node1 IAVAudioNode, node2 IAVAudioNode, bus1 AVAudioNodeBus, bus2 AVAudioNodeBus, format IAVAudioFormat)
Establishes a connection between two nodes, specifying the input and output busses.
node1: The source audio node.
node2: The destination audio node.
bus1: The output bus of the source audio node.
bus2: The input bus of the destination audio node.
format: If not [NULL], the engine uses this value for the format of the source audio node’s output bus. In all cases, the engine matches the format of the destination audio node’s input bus to the source audio node’s output bus.
Discussion ¶
Audio nodes have input and output busses (AVAudioNodeBus). Use this method to establish connections between audio nodes. Connections are always one-to-one, never one-to-many or many-to-one.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/connect(_:to:fromBus:toBus:format:)
func (AVAudioEngine) DetachNode ¶
func (a AVAudioEngine) DetachNode(node IAVAudioNode)
Detaches an audio node from the audio engine.
node: The audio node to detach.
Discussion ¶
If necessary, the audio engine safely disconnects the audio node before detaching it.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/detach(_:)
func (AVAudioEngine) DisableManualRenderingMode ¶
func (a AVAudioEngine) DisableManualRenderingMode()
Sets the engine to render to or from an audio device.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disableManualRenderingMode()
func (AVAudioEngine) DisconnectMIDIFrom ¶
func (a AVAudioEngine) DisconnectMIDIFrom(sourceNode IAVAudioNode, destinationNode IAVAudioNode)
Removes a MIDI connection between two nodes.
sourceNode: The node with the MIDI output to disconnect.
destinationNode: The node with the MIDI input to disconnect.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disconnectMIDI(_:from:)-1kssy
func (AVAudioEngine) DisconnectMIDIFromNodes ¶
func (a AVAudioEngine) DisconnectMIDIFromNodes(sourceNode IAVAudioNode, destinationNodes []AVAudioNode)
Removes a MIDI connection between one source node and multiple destination nodes.
sourceNode: The node with the MIDI output to disconnect.
destinationNodes: A list of nodes with the MIDI input to disconnect.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disconnectMIDI(_:from:)-7oaab
func (AVAudioEngine) DisconnectMIDIInput ¶
func (a AVAudioEngine) DisconnectMIDIInput(node IAVAudioNode)
Disconnects all input MIDI connections from a node.
node: The node with the MIDI input to disconnect.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disconnectMIDIInput(_:)
func (AVAudioEngine) DisconnectMIDIOutput ¶
func (a AVAudioEngine) DisconnectMIDIOutput(node IAVAudioNode)
Disconnects all output MIDI connections from a node.
node: The node with the MIDI outputs to disconnect.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disconnectMIDIOutput(_:)
func (AVAudioEngine) DisconnectNodeInput ¶
func (a AVAudioEngine) DisconnectNodeInput(node IAVAudioNode)
Removes all input connections of the node.
node: The audio node with the inputs you want to disconnect.
Discussion ¶
Connections break on each of the audio node’s input buses.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disconnectNodeInput(_:)
func (AVAudioEngine) DisconnectNodeInputBus ¶
func (a AVAudioEngine) DisconnectNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus)
Removes the input connection of a node on the specified bus.
node: The audio node with the input to disconnect.
bus: The destination’s input bus to disconnect.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disconnectNodeInput(_:bus:)
func (AVAudioEngine) DisconnectNodeOutput ¶
func (a AVAudioEngine) DisconnectNodeOutput(node IAVAudioNode)
Removes all output connections of a node.
node: The audio node with the outputs to disconnect.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disconnectNodeOutput(_:)
func (AVAudioEngine) DisconnectNodeOutputBus ¶
func (a AVAudioEngine) DisconnectNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus)
Removes the output connection of a node on the specified bus.
node: The audio node with the output to disconnect.
bus: The node’s output bus to disconnect.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/disconnectNodeOutput(_:bus:)
func (AVAudioEngine) EnableManualRenderingModeFormatMaximumFrameCountError ¶
func (a AVAudioEngine) EnableManualRenderingModeFormatMaximumFrameCountError(mode AVAudioEngineManualRenderingMode, pcmFormat IAVAudioFormat, maximumFrameCount AVAudioFrameCount) (bool, error)
Sets the engine to operate in manual rendering mode with the render format and maximum frame count you specify.
mode: The manual rendering mode to use.
pcmFormat: The format of the output PCM audio data from the engine.
maximumFrameCount: The maximum number of PCM sample frames the engine produces in a single render call.
Discussion ¶
Use this method to configure the engine to render in response to requests from the client. You must stop the engine before calling this method. The render format must be a PCM format and match the format of the rendering buffer.
The source nodes can supply the input data in manual rendering mode. For more information, see AVAudioPlayerNode and AVAudioInputNode.
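The offline variant of manual rendering is a chunked pull loop: each render call requests at most the maximum frame count configured here, until the target length is reached. A platform-independent sketch of just that loop arithmetic follows; the commented line marks where RenderOfflineToBufferError would go, and the helper name and totals are illustrative, not part of the binding.

```go
package main

import "fmt"

// framesToRequest returns the frame count for the next manual render
// call: at most maxFrames (the engine's ManualRenderingMaximumFrameCount),
// and never more than what remains of the target length.
func framesToRequest(remaining, maxFrames uint32) uint32 {
	if remaining < maxFrames {
		return remaining
	}
	return maxFrames
}

func main() {
	const total = 10000    // frames to render in all (illustrative)
	const maxFrames = 4096 // maximum frames per render call

	rendered := uint32(0)
	calls := 0
	for rendered < total {
		n := framesToRequest(total-rendered, maxFrames)
		// On Apple platforms this is where you'd call
		// engine.RenderOfflineToBufferError(n, buffer) and check the
		// returned AVAudioEngineManualRenderingStatus before advancing.
		rendered += n
		calls++
	}
	fmt.Println(rendered, calls) // 10000 3
}
```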
func (AVAudioEngine) Init ¶
func (a AVAudioEngine) Init() AVAudioEngine
Init initializes the instance.
func (AVAudioEngine) InputConnectionPointForNodeInputBus ¶
func (a AVAudioEngine) InputConnectionPointForNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus) IAVAudioConnectionPoint
Returns connection information about a node’s input bus.
node: The node with the input connection you’re querying.
bus: The node’s input bus for the connection you’re querying.
Return Value ¶
An AVAudioConnectionPoint object with connection information on the node’s input bus.
Discussion ¶
Connections are always one-to-one or one-to-many. This method returns `nil` if there’s no connection on the node’s specified input bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/inputConnectionPoint(for:inputBus:)
func (AVAudioEngine) InputNode ¶
func (a AVAudioEngine) InputNode() IAVAudioInputNode
The audio engine’s singleton input audio node.
Discussion ¶
The framework performs audio input through an input node. The audio engine creates a singleton on demand when first accessing this variable. To receive input, connect another node from the output of the input node, or create a recording tap on it.
When the engine renders to and from an audio device, the AVAudioSession category and the availability of hardware determine whether an app performs input (for example, input hardware isn’t available in tvOS). Check the input node’s input format (specifically, the hardware format) for a nonzero sample rate and channel count to see if input is in an enabled state.
Trying to perform input through the input node when it isn’t available or enabled causes the engine to throw an error (when possible) or an exception.
In manual rendering mode, the input node can synchronously supply data to the engine while it’s rendering. For more information, see [SetManualRenderingInputPCMFormatInputBlock].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/inputNode
func (AVAudioEngine) IsInManualRenderingMode ¶
func (a AVAudioEngine) IsInManualRenderingMode() bool
A Boolean value that indicates whether the engine is operating in manual rendering mode.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/isInManualRenderingMode
func (AVAudioEngine) MainMixerNode ¶
func (a AVAudioEngine) MainMixerNode() IAVAudioMixerNode
The audio engine’s optional singleton main mixer node.
Discussion ¶
The audio engine constructs a singleton main mixer and connects it to the [OutputNode] when first accessing this property. You can then connect additional audio nodes to the mixer.
If the client never sets the connection format between the `mainMixerNode` and the `outputNode`, the engine always updates the format to track the format of the `outputNode` on startup or restart, even after an AVAudioEngineConfigurationChangeNotification. Otherwise, it’s the client’s responsibility to update the connection format after an AVAudioEngineConfigurationChangeNotification.
By default, the mixer’s output format (sample rate and channel count) tracks the format of the output node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/mainMixerNode
func (AVAudioEngine) ManualRenderingBlock ¶
func (a AVAudioEngine) ManualRenderingBlock() AVAudioEngineManualRenderingBlock
The block that renders the engine when operating in manual rendering mode.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/manualRenderingBlock
func (AVAudioEngine) ManualRenderingFormat ¶
func (a AVAudioEngine) ManualRenderingFormat() IAVAudioFormat
The render format of the engine in manual rendering mode.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/manualRenderingFormat
func (AVAudioEngine) ManualRenderingMaximumFrameCount ¶
func (a AVAudioEngine) ManualRenderingMaximumFrameCount() AVAudioFrameCount
The maximum number of PCM sample frames the engine produces in any single render call in manual rendering mode.
Discussion ¶
If you get this property when the engine isn’t in manual rendering mode, it returns zero.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/manualRenderingMaximumFrameCount
func (AVAudioEngine) ManualRenderingMode ¶
func (a AVAudioEngine) ManualRenderingMode() AVAudioEngineManualRenderingMode
The manual rendering mode configured on the engine.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/manualRenderingMode
func (AVAudioEngine) ManualRenderingSampleTime ¶
func (a AVAudioEngine) ManualRenderingSampleTime() AVAudioFramePosition
An indication of where the engine is on its render timeline in manual rendering mode.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/manualRenderingSampleTime
func (AVAudioEngine) MusicSequence ¶
func (a AVAudioEngine) MusicSequence() objectivec.IObject
The music sequence instance that you attach to the audio engine, if any.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/musicSequence
func (AVAudioEngine) OutputConnectionPointsForNodeOutputBus ¶
func (a AVAudioEngine) OutputConnectionPointsForNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus) []AVAudioConnectionPoint
Returns connection information about a node’s output bus.
node: The node with the output connections you’re querying.
bus: The node’s output bus for connections you’re querying.
Return Value ¶
An array of AVAudioConnectionPoint objects with connection information on the node’s output bus.
Discussion ¶
Connections are always one-to-one or one-to-many. This method returns an empty array if there are no connections on the node’s specified output bus.
func (AVAudioEngine) OutputNode ¶
func (a AVAudioEngine) OutputNode() IAVAudioOutputNode
The audio engine’s singleton output audio node.
Discussion ¶
The framework performs audio output through an output node. The audio engine creates a singleton on demand when first accessing this variable. Connect another node to the input of the output node, or get a mixer using the [MainMixerNode] property.
When the engine renders to and from an audio device, the AVAudioSession category and the availability of hardware determine whether an app performs output. Check the output node’s output format (specifically, the hardware format) for a nonzero sample rate and channel count to see if output is in an enabled state.
Trying to perform output through the output node when it isn’t available or enabled causes the engine to throw an error (when possible) or an exception.
In manual rendering mode, the output node’s format determines the render format of the engine. For more information about changing it, see [EnableManualRenderingModeFormatMaximumFrameCountError].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/outputNode
func (AVAudioEngine) Pause ¶
func (a AVAudioEngine) Pause()
Pauses the audio engine.
Discussion ¶
This method stops the audio engine and the audio hardware, but doesn’t deallocate the resources allocated by the [Prepare] method. When your app doesn’t need to play audio, consider pausing or stopping the engine to minimize power consumption.
You resume the audio engine by invoking [StartAndReturnError].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/pause()
func (AVAudioEngine) Prepare ¶
func (a AVAudioEngine) Prepare()
Prepares the audio engine for starting.
Discussion ¶
This method preallocates many resources the audio engine requires to start. Use it to responsively start audio input or output.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/prepare()
func (AVAudioEngine) RenderOfflineToBufferError ¶
func (a AVAudioEngine) RenderOfflineToBufferError(numberOfFrames AVAudioFrameCount, buffer IAVAudioPCMBuffer) (AVAudioEngineManualRenderingStatus, error)
Makes a render call to the engine operating in the offline manual rendering mode.
numberOfFrames: The number of PCM sample frames to render.
buffer: The PCM buffer the engine renders the audio into.
Return Value ¶
One of the status codes from AVAudioEngineManualRenderingStatus. Irrespective of the returned status code, on exit, the output buffer’s [FrameLength] indicates the number of PCM samples the engine renders.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/renderOffline(_:to:)
func (AVAudioEngine) Reset ¶
func (a AVAudioEngine) Reset()
Resets all audio nodes in the audio engine.
Discussion ¶
This method resets all audio nodes in the audio engine. For example, use it to silence reverb and delay tails.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/reset()
func (AVAudioEngine) Running ¶
func (a AVAudioEngine) Running() bool
A Boolean value that indicates whether the audio engine is running.
Discussion ¶
The value is true if the audio engine is in a running state; otherwise, false.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/isRunning
func (AVAudioEngine) SetAutoShutdownEnabled ¶
func (a AVAudioEngine) SetAutoShutdownEnabled(value bool)
func (AVAudioEngine) SetMusicSequence ¶
func (a AVAudioEngine) SetMusicSequence(value objectivec.IObject)
func (AVAudioEngine) StartAndReturnError ¶
func (a AVAudioEngine) StartAndReturnError() (bool, error)
Starts the audio engine.
Discussion ¶
This method calls the [Prepare] method if you don’t call it after invoking [Stop]. It then starts the audio hardware through the AVAudioInputNode and AVAudioOutputNode instances in the audio engine. This method throws an error when:
- There’s a problem in the structure of the graph, such as the input can’t route to an output or to a recording tap through converter nodes.
- An AVAudioSession error occurs.
- The driver fails to start the hardware.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/start()
func (AVAudioEngine) Stop ¶
func (a AVAudioEngine) Stop()
Stops the audio engine and releases any previously prepared resources.
Discussion ¶
This method stops the audio engine and the audio hardware, and releases the resources allocated by the [Prepare] method. When your app doesn’t need to play audio, consider pausing or stopping the engine to minimize power consumption.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine/stop()
type AVAudioEngineClass ¶
type AVAudioEngineClass struct {
// contains filtered or unexported fields
}
func GetAVAudioEngineClass ¶
func GetAVAudioEngineClass() AVAudioEngineClass
GetAVAudioEngineClass returns the class object for AVAudioEngine.
func (AVAudioEngineClass) Alloc ¶
func (ac AVAudioEngineClass) Alloc() AVAudioEngine
Alloc allocates memory for a new instance of the class.
func (AVAudioEngineClass) Class ¶
func (ac AVAudioEngineClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioEngineManualRenderingBlock ¶
type AVAudioEngineManualRenderingBlock = func(uint32, objectivec.IObject, *int) AVAudioEngineManualRenderingStatus
AVAudioEngineManualRenderingBlock is the type that represents a block that renders the engine when operating in manual rendering mode.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngineManualRenderingBlock
type AVAudioEngineManualRenderingError ¶
type AVAudioEngineManualRenderingError int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngineManualRenderingError
const (
	// AVAudioEngineManualRenderingErrorInitialized: An operation that the system can’t perform because the engine is still running.
	AVAudioEngineManualRenderingErrorInitialized AVAudioEngineManualRenderingError = -80801
	// AVAudioEngineManualRenderingErrorInvalidMode: An operation the system can’t perform because the engine isn’t in manual rendering mode or the right variant of it.
	AVAudioEngineManualRenderingErrorInvalidMode AVAudioEngineManualRenderingError = -80800
	// AVAudioEngineManualRenderingErrorNotRunning: An operation the system can’t perform because the engine isn’t running.
	AVAudioEngineManualRenderingErrorNotRunning AVAudioEngineManualRenderingError = -80802
)
func (AVAudioEngineManualRenderingError) String ¶
func (e AVAudioEngineManualRenderingError) String() string
type AVAudioEngineManualRenderingMode ¶
type AVAudioEngineManualRenderingMode int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngineManualRenderingMode
const (
	// AVAudioEngineManualRenderingModeOffline: An engine that operates in an offline mode.
	AVAudioEngineManualRenderingModeOffline AVAudioEngineManualRenderingMode = 0
	// AVAudioEngineManualRenderingModeRealtime: An engine that operates under real-time constraints and doesn’t make blocking calls while rendering.
	AVAudioEngineManualRenderingModeRealtime AVAudioEngineManualRenderingMode = 1
)
func (AVAudioEngineManualRenderingMode) String ¶
func (e AVAudioEngineManualRenderingMode) String() string
type AVAudioEngineManualRenderingStatus ¶
type AVAudioEngineManualRenderingStatus int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngineManualRenderingStatus
const (
	// AVAudioEngineManualRenderingStatusCannotDoInCurrentContext: An operation that the system can’t perform under the current conditions.
	AVAudioEngineManualRenderingStatusCannotDoInCurrentContext AVAudioEngineManualRenderingStatus = 2
	// AVAudioEngineManualRenderingStatusError: A problem that occurs during rendering and results in no data returning.
	AVAudioEngineManualRenderingStatusError AVAudioEngineManualRenderingStatus = -1
	// AVAudioEngineManualRenderingStatusInsufficientDataFromInputNode: A condition that occurs when the input node doesn’t return enough input data to satisfy the render request at the time of the request.
	AVAudioEngineManualRenderingStatusInsufficientDataFromInputNode AVAudioEngineManualRenderingStatus = 1
	// AVAudioEngineManualRenderingStatusSuccess: A status that indicates the successful return of the requested data.
	AVAudioEngineManualRenderingStatusSuccess AVAudioEngineManualRenderingStatus = 0
)
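Because these status values are plain integers, a render loop can branch on them directly. A minimal, self-contained sketch follows; the constants below mirror the values listed above (redeclared locally so the example runs anywhere), and the `describe` helper and its wording are illustrative.

```go
package main

import "fmt"

// Status values mirrored from AVAudioEngineManualRenderingStatus above,
// redeclared locally for a self-contained example.
const (
	statusError            = -1
	statusSuccess          = 0
	statusInsufficientData = 1
	statusCannotDo         = 2
)

// describe maps a manual-rendering status to the action a render loop
// would typically take next.
func describe(status int) string {
	switch status {
	case statusSuccess:
		return "got all requested data; keep rendering"
	case statusInsufficientData:
		return "input node ran dry; supply more input and retry"
	case statusCannotDo:
		return "cannot render under current conditions; retry later"
	default:
		return "rendering error; stop and inspect the engine"
	}
}

func main() {
	fmt.Println(describe(statusSuccess))
	fmt.Println(describe(statusError))
}
```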
func (AVAudioEngineManualRenderingStatus) String ¶
func (e AVAudioEngineManualRenderingStatus) String() string
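To make the intent of these statuses concrete, the following self-contained sketch maps each status to the action a manual-rendering loop would typically take. The `renderStatus` type and `nextAction` helper are hypothetical stand-ins declared locally for illustration; they are not part of this package, though the numeric values mirror the constants above.

```go
package main

import "fmt"

// renderStatus is a local stand-in for AVAudioEngineManualRenderingStatus,
// redeclared here so the sketch is self-contained.
type renderStatus int

const (
	statusSuccess                  renderStatus = 0  // requested data was returned
	statusInsufficientInputData    renderStatus = 1  // input node supplied too little data
	statusCannotDoInCurrentContext renderStatus = 2  // conditions don't allow rendering right now
	statusError                    renderStatus = -1 // rendering failed; no data returned
)

// nextAction decides how a hypothetical offline render loop might proceed
// for each status.
func nextAction(s renderStatus) string {
	switch s {
	case statusSuccess:
		return "consume buffer"
	case statusInsufficientInputData:
		return "supply more input, then retry"
	case statusCannotDoInCurrentContext:
		return "retry later"
	default:
		return "stop and report error"
	}
}

func main() {
	for _, s := range []renderStatus{statusSuccess, statusInsufficientInputData, statusError} {
		fmt.Println(nextAction(s))
	}
}
```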
type AVAudioEnvironmentDistanceAttenuationModel ¶
type AVAudioEnvironmentDistanceAttenuationModel int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentDistanceAttenuationModel
const (
	// AVAudioEnvironmentDistanceAttenuationModelExponential: An exponential model that describes the drop-off in gain as the source moves away from the listener.
	AVAudioEnvironmentDistanceAttenuationModelExponential AVAudioEnvironmentDistanceAttenuationModel = 1
	// AVAudioEnvironmentDistanceAttenuationModelInverse: An inverse model that describes the drop-off in gain as the source moves away from the listener.
	AVAudioEnvironmentDistanceAttenuationModelInverse AVAudioEnvironmentDistanceAttenuationModel = 2
	// AVAudioEnvironmentDistanceAttenuationModelLinear: A linear model that describes the drop-off in gain as the source moves away from the listener.
	AVAudioEnvironmentDistanceAttenuationModelLinear AVAudioEnvironmentDistanceAttenuationModel = 3
)
func (AVAudioEnvironmentDistanceAttenuationModel) String ¶
func (e AVAudioEnvironmentDistanceAttenuationModel) String() string
type AVAudioEnvironmentDistanceAttenuationParameters ¶
type AVAudioEnvironmentDistanceAttenuationParameters struct {
objectivec.Object
}
An object that specifies the amount of attenuation (the gradual loss in audio intensity) over distance, and other characteristics.
Overview ¶
Getting and Setting the Attenuation Model ¶
- AVAudioEnvironmentDistanceAttenuationParameters.DistanceAttenuationModel: The distance attenuation model that describes the drop-off in gain as the source moves away from the listener.
- AVAudioEnvironmentDistanceAttenuationParameters.SetDistanceAttenuationModel
Getting and Setting the Attenuation Values ¶
- AVAudioEnvironmentDistanceAttenuationParameters.MaximumDistance: The distance beyond which the node applies no further attenuation, in meters.
- AVAudioEnvironmentDistanceAttenuationParameters.SetMaximumDistance
- AVAudioEnvironmentDistanceAttenuationParameters.ReferenceDistance: The minimum distance at which the node applies attenuation, in meters.
- AVAudioEnvironmentDistanceAttenuationParameters.SetReferenceDistance
- AVAudioEnvironmentDistanceAttenuationParameters.RolloffFactor: A factor that determines the attenuation curve.
- AVAudioEnvironmentDistanceAttenuationParameters.SetRolloffFactor
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentDistanceAttenuationParameters
func AVAudioEnvironmentDistanceAttenuationParametersFromID ¶
func AVAudioEnvironmentDistanceAttenuationParametersFromID(id objc.ID) AVAudioEnvironmentDistanceAttenuationParameters
AVAudioEnvironmentDistanceAttenuationParametersFromID constructs an AVAudioEnvironmentDistanceAttenuationParameters from an objc.ID.
An object that specifies the amount of attenuation (the gradual loss in audio intensity) over distance, and other characteristics.
func NewAVAudioEnvironmentDistanceAttenuationParameters ¶
func NewAVAudioEnvironmentDistanceAttenuationParameters() AVAudioEnvironmentDistanceAttenuationParameters
NewAVAudioEnvironmentDistanceAttenuationParameters creates a new AVAudioEnvironmentDistanceAttenuationParameters instance.
func (AVAudioEnvironmentDistanceAttenuationParameters) Autorelease ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) Autorelease() AVAudioEnvironmentDistanceAttenuationParameters
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioEnvironmentDistanceAttenuationParameters) DistanceAttenuationModel ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) DistanceAttenuationModel() AVAudioEnvironmentDistanceAttenuationModel
The distance attenuation model that describes the drop-off in gain as the source moves away from the listener.
Discussion ¶
The default value is the [AVAudioEnvironmentDistanceAttenuationModelInverse] attenuation model.
func (AVAudioEnvironmentDistanceAttenuationParameters) Init ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) Init() AVAudioEnvironmentDistanceAttenuationParameters
Init initializes the instance.
func (AVAudioEnvironmentDistanceAttenuationParameters) MaximumDistance ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) MaximumDistance() float32
The distance beyond which the node applies no further attenuation, in meters.
Discussion ¶
The default value is `100000.0` meters.
This property is relevant for [AVAudioEnvironmentDistanceAttenuationModelInverse].
func (AVAudioEnvironmentDistanceAttenuationParameters) ReferenceDistance ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) ReferenceDistance() float32
The minimum distance at which the node applies attenuation, in meters.
Discussion ¶
The default value is `1.0` meter.
This property is relevant for [AVAudioEnvironmentDistanceAttenuationModelInverse] and [AVAudioEnvironmentDistanceAttenuationModelLinear].
func (AVAudioEnvironmentDistanceAttenuationParameters) RolloffFactor ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) RolloffFactor() float32
A factor that determines the attenuation curve.
Discussion ¶
A higher value results in a steeper attenuation curve. The default value is `1.0`, and the value must be greater than `0.0`.
This property is relevant for [AVAudioEnvironmentDistanceAttenuationModelExponential], [AVAudioEnvironmentDistanceAttenuationModelInverse], and [AVAudioEnvironmentDistanceAttenuationModelLinear].
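The reference distance, maximum distance, and rolloff factor combine into a gain curve. The self-contained sketch below shows one common formulation of the three curves (the node computes this internally; the exact formulas are an assumption based on the conventional OpenAL-style attenuation models, not a documented API of this package):

```go
package main

import (
	"fmt"
	"math"
)

// inverseGain models inverse-distance attenuation: gain falls off as
// refDist / (refDist + rolloff*(dist - refDist)), with no attenuation
// inside the reference distance.
func inverseGain(dist, refDist, rolloff float64) float64 {
	if dist < refDist {
		dist = refDist
	}
	return refDist / (refDist + rolloff*(dist-refDist))
}

// exponentialGain models exponential attenuation: (dist/refDist)^(-rolloff).
func exponentialGain(dist, refDist, rolloff float64) float64 {
	if dist < refDist {
		dist = refDist
	}
	return math.Pow(dist/refDist, -rolloff)
}

// linearGain models linear attenuation between the reference and maximum
// distances, clamping to silence beyond the maximum distance.
func linearGain(dist, refDist, maxDist, rolloff float64) float64 {
	d := math.Min(math.Max(dist, refDist), maxDist)
	g := 1 - rolloff*(d-refDist)/(maxDist-refDist)
	return math.Max(g, 0)
}

func main() {
	// With a 1 m reference distance and rolloff factor 1, a source 10 m
	// away is attenuated to one tenth of its gain under the inverse model.
	fmt.Println(inverseGain(10, 1, 1))
}
```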
func (AVAudioEnvironmentDistanceAttenuationParameters) SetDistanceAttenuationModel ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) SetDistanceAttenuationModel(value AVAudioEnvironmentDistanceAttenuationModel)
func (AVAudioEnvironmentDistanceAttenuationParameters) SetMaximumDistance ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) SetMaximumDistance(value float32)
func (AVAudioEnvironmentDistanceAttenuationParameters) SetReferenceDistance ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) SetReferenceDistance(value float32)
func (AVAudioEnvironmentDistanceAttenuationParameters) SetRolloffFactor ¶
func (a AVAudioEnvironmentDistanceAttenuationParameters) SetRolloffFactor(value float32)
type AVAudioEnvironmentDistanceAttenuationParametersClass ¶
type AVAudioEnvironmentDistanceAttenuationParametersClass struct {
// contains filtered or unexported fields
}
func GetAVAudioEnvironmentDistanceAttenuationParametersClass ¶
func GetAVAudioEnvironmentDistanceAttenuationParametersClass() AVAudioEnvironmentDistanceAttenuationParametersClass
GetAVAudioEnvironmentDistanceAttenuationParametersClass returns the class object for AVAudioEnvironmentDistanceAttenuationParameters.
func (AVAudioEnvironmentDistanceAttenuationParametersClass) Alloc ¶
func (ac AVAudioEnvironmentDistanceAttenuationParametersClass) Alloc() AVAudioEnvironmentDistanceAttenuationParameters
Alloc allocates memory for a new instance of the class.
func (AVAudioEnvironmentDistanceAttenuationParametersClass) Class ¶
func (ac AVAudioEnvironmentDistanceAttenuationParametersClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioEnvironmentNode ¶
type AVAudioEnvironmentNode struct {
AVAudioNode
}
An object that simulates a 3D audio environment.
Overview ¶
The AVAudioEnvironmentNode class is a mixer node that simulates a 3D audio environment. Any node that conforms to AVAudioMixing can act as a source node, such as AVAudioPlayerNode.
The environment node has an implicit listener. You set the listener’s position and orientation, and the system then controls the way the user experiences the virtual world.
To help characterize the environment, this class defines properties for distance attenuation and reverberation.
AVAudio3DMixingSourceMode affects how inputs with different channel configurations render. Spatialization applies only to inputs with a mono channel connection format. This class doesn’t spatialize stereo inputs or support inputs with connection formats of more than two channels.
To set the node’s output to a multichannel format, use an AVAudioFormat that has one of the following Audio Channel Layout Tags:
- [KAudioChannelLayoutTag_AudioUnit_4]
- [KAudioChannelLayoutTag_AudioUnit_5_0]
- [KAudioChannelLayoutTag_AudioUnit_6_0]
- [KAudioChannelLayoutTag_AudioUnit_7_0]
- [KAudioChannelLayoutTag_AudioUnit_7_0_Front]
- [KAudioChannelLayoutTag_AudioUnit_8]
Getting and Setting Positional Properties ¶
- AVAudioEnvironmentNode.ListenerPosition: The listener’s position in the 3D environment.
- AVAudioEnvironmentNode.SetListenerPosition
- AVAudioEnvironmentNode.ListenerAngularOrientation: The listener’s angular orientation in the environment.
- AVAudioEnvironmentNode.SetListenerAngularOrientation
- AVAudioEnvironmentNode.ListenerVectorOrientation: The listener’s vector orientation in the environment.
- AVAudioEnvironmentNode.SetListenerVectorOrientation
Getting Attenuation and Reverb Properties ¶
- AVAudioEnvironmentNode.DistanceAttenuationParameters: The distance attenuation parameters for the environment.
- AVAudioEnvironmentNode.ReverbParameters: The reverb parameters for the environment.
Getting and Setting Environment Properties ¶
- AVAudioEnvironmentNode.OutputVolume: The mixer’s output volume.
- AVAudioEnvironmentNode.SetOutputVolume
- AVAudioEnvironmentNode.OutputType: The type of output hardware.
- AVAudioEnvironmentNode.SetOutputType
Getting the Available Rendering Algorithms ¶
- AVAudioEnvironmentNode.ApplicableRenderingAlgorithms: An array of rendering algorithms applicable to the environment node.
Getting the Head Tracking Status ¶
- AVAudioEnvironmentNode.ListenerHeadTrackingEnabled: A Boolean value that indicates whether the listener orientation is automatically rotated based on head orientation.
- AVAudioEnvironmentNode.SetListenerHeadTrackingEnabled
Getting the Input Bus ¶
- AVAudioEnvironmentNode.NextAvailableInputBus: An unused input bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode
func AVAudioEnvironmentNodeFromID ¶
func AVAudioEnvironmentNodeFromID(id objc.ID) AVAudioEnvironmentNode
AVAudioEnvironmentNodeFromID constructs an AVAudioEnvironmentNode from an objc.ID.
An object that simulates a 3D audio environment.
func NewAVAudioEnvironmentNode ¶
func NewAVAudioEnvironmentNode() AVAudioEnvironmentNode
NewAVAudioEnvironmentNode creates a new AVAudioEnvironmentNode instance.
func (AVAudioEnvironmentNode) ApplicableRenderingAlgorithms ¶
func (a AVAudioEnvironmentNode) ApplicableRenderingAlgorithms() []foundation.NSNumber
An array of rendering algorithms applicable to the environment node.
Discussion ¶
The AVAudioEnvironmentNode class supports several rendering algorithms for each input bus as AVAudio3DMixingRenderingAlgorithm defines.
Depending on the current output format of the environment node, this method returns an immutable array of the applicable rendering algorithms. This subset of applicable rendering algorithms is important when you configure the environment node to a multichannel output format because only a subset of the algorithms render to all of the channels.
Retrieve the applicable algorithms after a successful connection to the destination node through one of the AVAudioEngine connect methods.
func (AVAudioEnvironmentNode) Autorelease ¶
func (a AVAudioEnvironmentNode) Autorelease() AVAudioEnvironmentNode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioEnvironmentNode) DestinationForMixerBus ¶
func (a AVAudioEnvironmentNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t in a connected state to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
func (AVAudioEnvironmentNode) DistanceAttenuationParameters ¶
func (a AVAudioEnvironmentNode) DistanceAttenuationParameters() IAVAudioEnvironmentDistanceAttenuationParameters
The distance attenuation parameters for the environment.
func (AVAudioEnvironmentNode) Init ¶
func (a AVAudioEnvironmentNode) Init() AVAudioEnvironmentNode
Init initializes the instance.
func (AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_4 ¶
func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_4() objectivec.IObject
A quadraphonic symmetrical layout, recommended for use by audio units.
See: https://developer.apple.com/documentation/CoreAudioTypes/kAudioChannelLayoutTag_AudioUnit_4
func (AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_5_0 ¶
func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_5_0() objectivec.IObject
A 5-channel surround-based layout, recommended for use by audio units.
See: https://developer.apple.com/documentation/CoreAudioTypes/kAudioChannelLayoutTag_AudioUnit_5_0
func (AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_6_0 ¶
func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_6_0() objectivec.IObject
A 6-channel surround-based layout, recommended for use by audio units.
See: https://developer.apple.com/documentation/CoreAudioTypes/kAudioChannelLayoutTag_AudioUnit_6_0
func (AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_7_0 ¶
func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_7_0() objectivec.IObject
A 7-channel surround-based layout, recommended for use by audio units.
See: https://developer.apple.com/documentation/CoreAudioTypes/kAudioChannelLayoutTag_AudioUnit_7_0
func (AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_7_0_Front ¶
func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_7_0_Front() objectivec.IObject
An alternate 7-channel surround-based layout, for use by audio units.
See: https://developer.apple.com/documentation/CoreAudioTypes/kAudioChannelLayoutTag_AudioUnit_7_0_Front
func (AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_8 ¶
func (a AVAudioEnvironmentNode) KAudioChannelLayoutTag_AudioUnit_8() objectivec.IObject
An octagonal symmetrical layout, recommended for use by audio units.
See: https://developer.apple.com/documentation/CoreAudioTypes/kAudioChannelLayoutTag_AudioUnit_8
func (AVAudioEnvironmentNode) ListenerAngularOrientation ¶
func (a AVAudioEnvironmentNode) ListenerAngularOrientation() AVAudio3DAngularOrientation
The listener’s angular orientation in the environment.
Discussion ¶
The system specifies all angles in degrees.
The default orientation is with the listener looking directly along the negative z-axis (forward). This orientation has a yaw of `0.0` degrees, a pitch of `0.0` degrees, and a roll of `0.0` degrees.
Changing this property results in a corresponding change in the [ListenerVectorOrientation] property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode/listenerAngularOrientation
func (AVAudioEnvironmentNode) ListenerHeadTrackingEnabled ¶
func (a AVAudioEnvironmentNode) ListenerHeadTrackingEnabled() bool
A Boolean value that indicates whether the listener orientation is automatically rotated based on head orientation.
Discussion ¶
To enable head tracking, your app must have the com.apple.developer.coremotion.head-pose entitlement.
Set this value to true to enable head tracking with compatible AirPods.
func (AVAudioEnvironmentNode) ListenerPosition ¶
func (a AVAudioEnvironmentNode) ListenerPosition() AVAudio3DPoint
The listener’s position in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode/listenerPosition
func (AVAudioEnvironmentNode) ListenerVectorOrientation ¶
func (a AVAudioEnvironmentNode) ListenerVectorOrientation() AVAudio3DVectorOrientation
The listener’s vector orientation in the environment.
Discussion ¶
The default orientation is with the listener looking directly along the negative z-axis (forward).
The node expresses a forward vector orientation as `(0, 0, -1)`, and an up vector as `(0, 1, 0)`.
Changing this property results in a corresponding change in the [ListenerAngularOrientation] property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode/listenerVectorOrientation
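Because the angular and vector orientations mirror each other, it can help to see how a yaw/pitch pair maps onto a forward vector. This self-contained sketch illustrates one such mapping; the yaw sign convention is an assumption, since the documentation only pins down that yaw 0, pitch 0 faces `(0, 0, -1)`, and roll is omitted because it rotates the up vector rather than the forward vector:

```go
package main

import (
	"fmt"
	"math"
)

// forwardVector converts a yaw/pitch angular orientation (in degrees) to
// the equivalent forward unit vector in the node's coordinate system,
// where yaw 0 and pitch 0 face along the negative z-axis.
func forwardVector(yawDeg, pitchDeg float64) (x, y, z float64) {
	yaw := yawDeg * math.Pi / 180
	pitch := pitchDeg * math.Pi / 180
	x = -math.Sin(yaw) * math.Cos(pitch)
	y = math.Sin(pitch)
	z = -math.Cos(yaw) * math.Cos(pitch)
	return
}

func main() {
	// The default orientation: forward along the negative z-axis.
	fmt.Println(forwardVector(0, 0))
}
```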
func (AVAudioEnvironmentNode) NextAvailableInputBus ¶
func (a AVAudioEnvironmentNode) NextAvailableInputBus() AVAudioNodeBus
An unused input bus.
Discussion ¶
This method finds and returns the first input bus that doesn’t have a connection with a node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode/nextAvailableInputBus
func (AVAudioEnvironmentNode) Obstruction ¶
func (a AVAudioEnvironmentNode) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
Discussion ¶
The value of `obstruction` is in decibels. The system blocks only the direct path of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
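Because obstruction (and occlusion, below) is expressed in decibels, it helps to relate those values to linear amplitude. The conversion below is standard audio math, not an API of this package:

```go
package main

import (
	"fmt"
	"math"
)

// dbToAmplitude converts a decibel attenuation value, such as an
// obstruction or occlusion setting, to a linear amplitude factor:
// amplitude = 10^(dB/20).
func dbToAmplitude(db float64) float64 {
	return math.Pow(10, db/20)
}

func main() {
	fmt.Printf("%.3f\n", dbToAmplitude(-6))   // roughly half amplitude
	fmt.Printf("%.3f\n", dbToAmplitude(-100)) // the floor of the valid range: near silence
}
```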
func (AVAudioEnvironmentNode) Occlusion ¶
func (a AVAudioEnvironmentNode) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
Discussion ¶
The value of `occlusion` is in decibels. The system blocks the direct and reverb paths of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioEnvironmentNode) OutputType ¶
func (a AVAudioEnvironmentNode) OutputType() AVAudioEnvironmentOutputType
The type of output hardware.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode/outputType
func (AVAudioEnvironmentNode) OutputVolume ¶
func (a AVAudioEnvironmentNode) OutputVolume() float32
The mixer’s output volume.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode/outputVolume
func (AVAudioEnvironmentNode) Pan ¶
func (a AVAudioEnvironmentNode) Pan() float32
The bus’s stereo pan.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-1.0` to `1.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioEnvironmentNode) PointSourceInHeadMode ¶
func (a AVAudioEnvironmentNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioEnvironmentNode) Position ¶
func (a AVAudioEnvironmentNode) Position() AVAudio3DPoint
The location of the source in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioEnvironmentNode) Rate ¶
func (a AVAudioEnvironmentNode) Rate() float32
A value that changes the playback rate of the input signal.
Discussion ¶
A value of `2.0` results in the output audio playing one octave higher. A value of `0.5` results in the output audio playing one octave lower.
The default value is `1.0`, and the range of valid values is `0.5` to `2.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
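The octave relationship described above follows from the rate being a frequency multiplier: the pitch shift in octaves is the base-2 logarithm of the rate. A self-contained illustration (general audio math, not part of the bindings):

```go
package main

import (
	"fmt"
	"math"
)

// pitchShiftOctaves expresses a playback-rate change as a pitch shift in
// octaves: rate 2.0 is +1 octave, rate 0.5 is -1 octave.
func pitchShiftOctaves(rate float64) float64 {
	return math.Log2(rate)
}

func main() {
	fmt.Println(pitchShiftOctaves(2.0)) // 1
	fmt.Println(pitchShiftOctaves(0.5)) // -1
}
```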
func (AVAudioEnvironmentNode) RenderingAlgorithm ¶
func (a AVAudioEnvironmentNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
Discussion ¶
Depending on the current output format of the AVAudioEnvironmentNode instance, the system may only support a subset of the rendering algorithms. You can retrieve an array of valid rendering algorithms by calling the [ApplicableRenderingAlgorithms] function of the AVAudioEnvironmentNode instance.
The default rendering algorithm is [AVAudio3DMixingRenderingAlgorithmEqualPowerPanning]. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioEnvironmentNode) ReverbBlend ¶
func (a AVAudioEnvironmentNode) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
Discussion ¶
This property controls the amount of the source’s audio that the AVAudioEnvironmentNode instance processes. A value of `0.5` results in an equal blend of dry and processed (wet) audio.
The default is `0.0`, and the range of valid values is `0.0` (completely dry) to `1.0` (completely wet). Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
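The dry/wet blend described above amounts to a linear crossfade per sample. The node performs this mixing internally; the sketch below only illustrates the arithmetic and is not an API of this package:

```go
package main

import "fmt"

// mix blends a dry sample with its reverb-processed (wet) counterpart
// according to a blend value in [0, 1], mirroring how the documentation
// describes reverbBlend: 0 is completely dry, 1 is completely wet.
func mix(dry, wet, blend float64) float64 {
	return dry*(1-blend) + wet*blend
}

func main() {
	fmt.Println(mix(1.0, 0.2, 0.0)) // completely dry
	fmt.Println(mix(1.0, 0.2, 0.5)) // equal blend of dry and wet
	fmt.Println(mix(1.0, 0.2, 1.0)) // completely wet
}
```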
func (AVAudioEnvironmentNode) ReverbParameters ¶
func (a AVAudioEnvironmentNode) ReverbParameters() IAVAudioEnvironmentReverbParameters
The reverb parameters for the environment.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode/reverbParameters
func (AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_4 ¶
func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_4(value objectivec.IObject)
func (AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_5_0 ¶
func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_5_0(value objectivec.IObject)
func (AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_6_0 ¶
func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_6_0(value objectivec.IObject)
func (AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_7_0 ¶
func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_7_0(value objectivec.IObject)
func (AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_7_0_Front ¶
func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_7_0_Front(value objectivec.IObject)
func (AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_8 ¶
func (a AVAudioEnvironmentNode) SetKAudioChannelLayoutTag_AudioUnit_8(value objectivec.IObject)
func (AVAudioEnvironmentNode) SetListenerAngularOrientation ¶
func (a AVAudioEnvironmentNode) SetListenerAngularOrientation(value AVAudio3DAngularOrientation)
func (AVAudioEnvironmentNode) SetListenerHeadTrackingEnabled ¶
func (a AVAudioEnvironmentNode) SetListenerHeadTrackingEnabled(value bool)
func (AVAudioEnvironmentNode) SetListenerPosition ¶
func (a AVAudioEnvironmentNode) SetListenerPosition(value AVAudio3DPoint)
func (AVAudioEnvironmentNode) SetListenerVectorOrientation ¶
func (a AVAudioEnvironmentNode) SetListenerVectorOrientation(value AVAudio3DVectorOrientation)
func (AVAudioEnvironmentNode) SetObstruction ¶
func (a AVAudioEnvironmentNode) SetObstruction(value float32)
func (AVAudioEnvironmentNode) SetOcclusion ¶
func (a AVAudioEnvironmentNode) SetOcclusion(value float32)
func (AVAudioEnvironmentNode) SetOutputType ¶
func (a AVAudioEnvironmentNode) SetOutputType(value AVAudioEnvironmentOutputType)
func (AVAudioEnvironmentNode) SetOutputVolume ¶
func (a AVAudioEnvironmentNode) SetOutputVolume(value float32)
func (AVAudioEnvironmentNode) SetPan ¶
func (a AVAudioEnvironmentNode) SetPan(value float32)
func (AVAudioEnvironmentNode) SetPointSourceInHeadMode ¶
func (a AVAudioEnvironmentNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioEnvironmentNode) SetPosition ¶
func (a AVAudioEnvironmentNode) SetPosition(value AVAudio3DPoint)
func (AVAudioEnvironmentNode) SetRate ¶
func (a AVAudioEnvironmentNode) SetRate(value float32)
func (AVAudioEnvironmentNode) SetRenderingAlgorithm ¶
func (a AVAudioEnvironmentNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioEnvironmentNode) SetReverbBlend ¶
func (a AVAudioEnvironmentNode) SetReverbBlend(value float32)
func (AVAudioEnvironmentNode) SetSourceMode ¶
func (a AVAudioEnvironmentNode) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioEnvironmentNode) SetVolume ¶
func (a AVAudioEnvironmentNode) SetVolume(value float32)
func (AVAudioEnvironmentNode) SourceMode ¶
func (a AVAudioEnvironmentNode) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioEnvironmentNode) Volume ¶
func (a AVAudioEnvironmentNode) Volume() float32
The bus’s input volume.
Discussion ¶
The default value is `1.0`, and the range of valid values is `0.0` to `1.0`. Only the AVAudioEnvironmentNode and the AVAudioMixerNode implement this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioEnvironmentNodeClass ¶
type AVAudioEnvironmentNodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioEnvironmentNodeClass ¶
func GetAVAudioEnvironmentNodeClass() AVAudioEnvironmentNodeClass
GetAVAudioEnvironmentNodeClass returns the class object for AVAudioEnvironmentNode.
func (AVAudioEnvironmentNodeClass) Alloc ¶
func (ac AVAudioEnvironmentNodeClass) Alloc() AVAudioEnvironmentNode
Alloc allocates memory for a new instance of the class.
func (AVAudioEnvironmentNodeClass) Class ¶
func (ac AVAudioEnvironmentNodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioEnvironmentOutputType ¶
type AVAudioEnvironmentOutputType int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentOutputType
const (
	// AVAudioEnvironmentOutputTypeAuto: Automatically detects the playback route and picks the correct output.
	AVAudioEnvironmentOutputTypeAuto AVAudioEnvironmentOutputType = 0
	// AVAudioEnvironmentOutputTypeBuiltInSpeakers: Renders the audio output for built-in speakers on the current hardware.
	AVAudioEnvironmentOutputTypeBuiltInSpeakers AVAudioEnvironmentOutputType = 2
	// AVAudioEnvironmentOutputTypeExternalSpeakers: Renders the audio output for external speakers according to the audio environment node’s output channel layout.
	AVAudioEnvironmentOutputTypeExternalSpeakers AVAudioEnvironmentOutputType = 3
	// AVAudioEnvironmentOutputTypeHeadphones: Renders the audio output for headphones.
	AVAudioEnvironmentOutputTypeHeadphones AVAudioEnvironmentOutputType = 1
)
func (AVAudioEnvironmentOutputType) String ¶
func (e AVAudioEnvironmentOutputType) String() string
type AVAudioEnvironmentReverbParameters ¶
type AVAudioEnvironmentReverbParameters struct {
objectivec.Object
}
A class that encapsulates the parameters that you use to control the reverb of the environment node class.
Overview ¶
Use reverberation to simulate the acoustic characteristics of an environment. The AVAudioEnvironmentNode class has a built-in reverb that describes the space the listener is in.
The reverb has a single filter that sits at the end of the chain. You use this filter to shape the overall sound of the reverb. For instance, select one of the reverb presets to simulate the general space, and then use the filter to brighten or darken the overall sound.
You can’t create a standalone instance of AVAudioEnvironmentReverbParameters. Only an instance vended by a source object is valid, such as an AVAudioEnvironmentNode instance.
Enabling and Disabling Reverb ¶
- AVAudioEnvironmentReverbParameters.Enable: A Boolean value that indicates whether reverberation is in an enabled state.
- AVAudioEnvironmentReverbParameters.SetEnable
Getting and Setting Reverb Values ¶
- AVAudioEnvironmentReverbParameters.Level: Controls the amount of reverb, in decibels.
- AVAudioEnvironmentReverbParameters.SetLevel
- AVAudioEnvironmentReverbParameters.FilterParameters: A filter that the system applies to the output.
- AVAudioEnvironmentReverbParameters.LoadFactoryReverbPreset: Loads one of the reverb’s factory presets.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentReverbParameters
func AVAudioEnvironmentReverbParametersFromID ¶
func AVAudioEnvironmentReverbParametersFromID(id objc.ID) AVAudioEnvironmentReverbParameters
AVAudioEnvironmentReverbParametersFromID constructs an AVAudioEnvironmentReverbParameters from an objc.ID.
A class that encapsulates the parameters that you use to control the reverb of the environment node class.
func NewAVAudioEnvironmentReverbParameters ¶
func NewAVAudioEnvironmentReverbParameters() AVAudioEnvironmentReverbParameters
NewAVAudioEnvironmentReverbParameters creates a new AVAudioEnvironmentReverbParameters instance.
func (AVAudioEnvironmentReverbParameters) Autorelease ¶
func (a AVAudioEnvironmentReverbParameters) Autorelease() AVAudioEnvironmentReverbParameters
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioEnvironmentReverbParameters) Enable ¶
func (a AVAudioEnvironmentReverbParameters) Enable() bool
A Boolean value that indicates whether reverberation is in an enabled state.
Discussion ¶
The default value is false.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentReverbParameters/enable
func (AVAudioEnvironmentReverbParameters) FilterParameters ¶
func (a AVAudioEnvironmentReverbParameters) FilterParameters() IAVAudioUnitEQFilterParameters
A filter that the system applies to the output.
func (AVAudioEnvironmentReverbParameters) Init ¶
func (a AVAudioEnvironmentReverbParameters) Init() AVAudioEnvironmentReverbParameters
Init initializes the instance.
func (AVAudioEnvironmentReverbParameters) Level ¶
func (a AVAudioEnvironmentReverbParameters) Level() float32
Controls the amount of reverb, in decibels.
Discussion ¶
The default value is `0.0`. The values must be within the range of `-40` to `40` dB.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentReverbParameters/level
func (AVAudioEnvironmentReverbParameters) LoadFactoryReverbPreset ¶
func (a AVAudioEnvironmentReverbParameters) LoadFactoryReverbPreset(preset AVAudioUnitReverbPreset)
Loads one of the reverb’s factory presets.
preset: A reverb preset to load.
Discussion ¶
Loading a factory reverb preset changes the sound of the reverb. This change is independent of the filter, which follows the reverb in the signal chain.
func (AVAudioEnvironmentReverbParameters) SetEnable ¶
func (a AVAudioEnvironmentReverbParameters) SetEnable(value bool)
func (AVAudioEnvironmentReverbParameters) SetLevel ¶
func (a AVAudioEnvironmentReverbParameters) SetLevel(value float32)
type AVAudioEnvironmentReverbParametersClass ¶
type AVAudioEnvironmentReverbParametersClass struct {
// contains filtered or unexported fields
}
func GetAVAudioEnvironmentReverbParametersClass ¶
func GetAVAudioEnvironmentReverbParametersClass() AVAudioEnvironmentReverbParametersClass
GetAVAudioEnvironmentReverbParametersClass returns the class object for AVAudioEnvironmentReverbParameters.
func (AVAudioEnvironmentReverbParametersClass) Alloc ¶
func (ac AVAudioEnvironmentReverbParametersClass) Alloc() AVAudioEnvironmentReverbParameters
Alloc allocates memory for a new instance of the class.
func (AVAudioEnvironmentReverbParametersClass) Class ¶
func (ac AVAudioEnvironmentReverbParametersClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioFile ¶
type AVAudioFile struct {
objectivec.Object
}
An object that represents an audio file that the system can open for reading or writing.
Overview ¶
Regardless of the file format, you read and write the file using AVAudioPCMBuffer objects. These objects contain samples in an AVAudioCommonFormat, which the framework refers to as the file's processing format. The framework converts to and from the file's actual format.
Reads and writes are always sequential. Random access is possible by setting the AVAudioFile.FramePosition property.
Creating an Audio File ¶
- AVAudioFile.InitForReadingError: Opens a file for reading using the standard, deinterleaved floating point format.
- AVAudioFile.InitForReadingCommonFormatInterleavedError: Opens a file for reading using the specified processing format.
- AVAudioFile.InitForWritingSettingsError: Opens a file for writing using the specified settings.
- AVAudioFile.InitForWritingSettingsCommonFormatInterleavedError: Opens a file for writing using a specified processing format and settings.
Reading and Writing the Audio Buffer ¶
- AVAudioFile.ReadIntoBufferError: Reads an entire audio buffer.
- AVAudioFile.ReadIntoBufferFrameCountError: Reads a portion of an audio buffer using the number of frames you specify.
- AVAudioFile.WriteFromBufferError: Writes an audio buffer sequentially.
- AVAudioFile.Close: Closes the audio file.
Getting Audio File Properties ¶
- AVAudioFile.Url: The location of the audio file.
- AVAudioFile.FileFormat: The on-disk format of the file.
- AVAudioFile.ProcessingFormat: The processing format of the file.
- AVAudioFile.Length: The number of sample frames in the file.
- AVAudioFile.FramePosition: The position in the file where the next read or write operation occurs.
- AVAudioFile.SetFramePosition
- AVAudioFile.AVAudioFileTypeKey: A string that indicates the audio file type.
- AVAudioFile.IsOpen: A Boolean value that indicates whether the file is open.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile
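The sequential-read model described above can be sketched in plain Go. The `pcmReader` interface and `fakeFile` below are hypothetical stand-ins for AVAudioFile and AVAudioPCMBuffer (the real bindings require macOS), used only to illustrate how each read advances the frame position and how a zero-frame read signals end of file:

```go
package main

import "fmt"

// pcmReader abstracts the subset of AVAudioFile behavior described above:
// reads are sequential, and each read reports how many frames it delivered.
type pcmReader interface {
	// ReadFrames reads up to capacity frames, returning the number read.
	ReadFrames(capacity uint32) (uint32, error)
}

// fakeFile is a hypothetical in-memory stand-in for an audio file.
type fakeFile struct {
	length, pos int64
}

func (f *fakeFile) ReadFrames(capacity uint32) (uint32, error) {
	remaining := f.length - f.pos
	if remaining <= 0 {
		return 0, nil
	}
	n := int64(capacity)
	if n > remaining {
		n = remaining
	}
	f.pos += n // each read advances the frame position
	return uint32(n), nil
}

// readAll drains a reader in fixed-size chunks, as a loop over
// ReadIntoBufferError would with a buffer of a given frame capacity.
func readAll(r pcmReader, capacity uint32) (int64, error) {
	var total int64
	for {
		n, err := r.ReadFrames(capacity)
		if err != nil {
			return total, err
		}
		if n == 0 { // a zero-frame read means end of file
			return total, nil
		}
		total += int64(n)
	}
}

func main() {
	f := &fakeFile{length: 44100} // one second at 44.1 kHz
	total, _ := readAll(f, 4096)
	fmt.Println(total) // 44100
}
```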
func AVAudioFileFromID ¶
func AVAudioFileFromID(id objc.ID) AVAudioFile
AVAudioFileFromID constructs an AVAudioFile from an objc.ID.
An object that represents an audio file that the system can open for reading or writing.
func NewAVAudioFile ¶
func NewAVAudioFile() AVAudioFile
NewAVAudioFile creates a new AVAudioFile instance.
func NewAudioFileForReadingCommonFormatInterleavedError ¶
func NewAudioFileForReadingCommonFormatInterleavedError(fileURL foundation.INSURL, format AVAudioCommonFormat, interleaved bool) (AVAudioFile, error)
Opens a file for reading using the specified processing format.
fileURL: The file to read.
format: The processing format to use when reading from the file.
interleaved: The Boolean value that indicates whether to use an interleaved processing format.
Return Value ¶
A new AVAudioFile instance you use for reading.
Discussion ¶
The processing format applies to the buffers your app reads from the file. The system reads the file's content and converts it from the file format to the processing format. The processing format must be linear PCM at the same sample rate as the file's contents. The interleaved parameter determines whether the processing buffers use an interleaved float format.
func NewAudioFileForReadingError ¶
func NewAudioFileForReadingError(fileURL foundation.INSURL) (AVAudioFile, error)
Opens a file for reading using the standard, deinterleaved floating point format.
fileURL: The file to read.
Return Value ¶
A new AVAudioFile instance you use for reading.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/init(forReading:)
func NewAudioFileForWritingSettingsCommonFormatInterleavedError ¶
func NewAudioFileForWritingSettingsCommonFormatInterleavedError(fileURL foundation.INSURL, settings foundation.INSDictionary, format AVAudioCommonFormat, interleaved bool) (AVAudioFile, error)
Opens a file for writing using a specified processing format and settings.
fileURL: The path at which to create the file.
settings: The format of the file to create.
format: The processing format to use when writing to the file.
interleaved: The Boolean value that indicates whether to use an interleaved processing format.
Return Value ¶
A new AVAudioFile instance for writing.
Discussion ¶
This method infers the file type to create from the file extension of `fileURL`, and overwrites a file at the specified URL if a file exists.
For more information about the `settings` parameter, see the [Settings] property in the AVAudioRecorder class.
func NewAudioFileForWritingSettingsError ¶
func NewAudioFileForWritingSettingsError(fileURL foundation.INSURL, settings foundation.INSDictionary) (AVAudioFile, error)
Opens a file for writing using the specified settings.
fileURL: The path of the file to create for writing.
settings: The format of the file to create.
Return Value ¶
A new AVAudioFile instance for writing.
Discussion ¶
This method infers the file type to create from the file extension of `fileURL`, and overwrites a file at the specified URL if a file exists.
The file opens for writing using the standard format [AudioPCMFormatFloat32]. For more information about the `settings` parameter, see the [Settings] property in the AVAudioRecorder class.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/init(forWriting:settings:)
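The `settings` dictionary is an ordinary key-value map of audio setting keys. A minimal sketch in plain Go, assuming the standard AVFoundation audio settings key names; bridging this map to a foundation.INSDictionary is omitted and would depend on the binding's own helpers:

```go
package main

import "fmt"

func main() {
	// Hypothetical sketch: typical audio settings keys, as documented
	// under AVFoundation's "Audio settings". The linear PCM format ID
	// is the four-character code 'lpcm' (0x6C70636D).
	settings := map[string]any{
		"AVFormatIDKey":          0x6C70636D, // 'lpcm'
		"AVSampleRateKey":        44100.0,
		"AVNumberOfChannelsKey":  2,
		"AVLinearPCMBitDepthKey": 16,
	}
	fmt.Println(settings["AVSampleRateKey"])
}
```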
func (AVAudioFile) AVAudioFileTypeKey ¶
func (a AVAudioFile) AVAudioFileTypeKey() string
A string that indicates the audio file type.
See: https://developer.apple.com/documentation/avfaudio/avaudiofiletypekey
func (AVAudioFile) Autorelease ¶
func (a AVAudioFile) Autorelease() AVAudioFile
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioFile) Close ¶
func (a AVAudioFile) Close()
Closes the audio file.
Discussion ¶
Calling this method closes the underlying file, if open. It’s normally unnecessary to close a file opened for reading because it’s automatically closed when released. It’s only necessary to close a file opened for writing in order to achieve specific control over when the file’s header is updated.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/close()
func (AVAudioFile) FileFormat ¶
func (a AVAudioFile) FileFormat() IAVAudioFormat
The on-disk format of the file.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/fileFormat
func (AVAudioFile) FramePosition ¶
func (a AVAudioFile) FramePosition() AVAudioFramePosition
The position in the file where the next read or write operation occurs.
Discussion ¶
Set the `framePosition` property to perform a seek before a read or write. A read or write operation advances the frame position value by the number of frames it reads or writes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/framePosition
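Seeking is just an assignment to the frame position. A small pure-Go helper (hypothetical, not part of the bindings) that clamps a requested seek target into the file's valid range before you call SetFramePosition:

```go
package main

import "fmt"

// clampFramePosition clamps a requested seek target into [0, length],
// the valid range for an AVAudioFile frame position. Hypothetical helper.
func clampFramePosition(target, length int64) int64 {
	if target < 0 {
		return 0
	}
	if target > length {
		return length
	}
	return target
}

func main() {
	const length = 88200 // two seconds at 44.1 kHz
	fmt.Println(clampFramePosition(-10, length))    // 0
	fmt.Println(clampFramePosition(44100, length))  // 44100
	fmt.Println(clampFramePosition(100000, length)) // 88200
}
```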
func (AVAudioFile) InitForReadingCommonFormatInterleavedError ¶
func (a AVAudioFile) InitForReadingCommonFormatInterleavedError(fileURL foundation.INSURL, format AVAudioCommonFormat, interleaved bool) (AVAudioFile, error)
Opens a file for reading using the specified processing format.
fileURL: The file to read.
format: The processing format to use when reading from the file.
interleaved: The Boolean value that indicates whether to use an interleaved processing format.
Return Value ¶
A new AVAudioFile instance you use for reading.
Discussion ¶
The processing format applies to the buffers your app reads from the file. The system reads the file's content and converts it from the file format to the processing format. The processing format must be linear PCM at the same sample rate as the file's contents. The interleaved parameter determines whether the processing buffers use an interleaved float format.
func (AVAudioFile) InitForReadingError ¶
func (a AVAudioFile) InitForReadingError(fileURL foundation.INSURL) (AVAudioFile, error)
Opens a file for reading using the standard, deinterleaved floating point format.
fileURL: The file to read.
Return Value ¶
A new AVAudioFile instance you use for reading.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/init(forReading:)
func (AVAudioFile) InitForWritingSettingsCommonFormatInterleavedError ¶
func (a AVAudioFile) InitForWritingSettingsCommonFormatInterleavedError(fileURL foundation.INSURL, settings foundation.INSDictionary, format AVAudioCommonFormat, interleaved bool) (AVAudioFile, error)
Opens a file for writing using a specified processing format and settings.
fileURL: The path at which to create the file.
settings: The format of the file to create.
format: The processing format to use when writing to the file.
interleaved: The Boolean value that indicates whether to use an interleaved processing format.
Return Value ¶
A new AVAudioFile instance for writing.
Discussion ¶
This method infers the file type to create from the file extension of `fileURL`, and overwrites a file at the specified URL if a file exists.
For more information about the `settings` parameter, see the [Settings] property in the AVAudioRecorder class.
func (AVAudioFile) InitForWritingSettingsError ¶
func (a AVAudioFile) InitForWritingSettingsError(fileURL foundation.INSURL, settings foundation.INSDictionary) (AVAudioFile, error)
Opens a file for writing using the specified settings.
fileURL: The path of the file to create for writing.
settings: The format of the file to create.
Return Value ¶
A new AVAudioFile instance for writing.
Discussion ¶
This method infers the file type to create from the file extension of `fileURL`, and overwrites a file at the specified URL if a file exists.
The file opens for writing using the standard format [AudioPCMFormatFloat32]. For more information about the `settings` parameter, see the [Settings] property in the AVAudioRecorder class.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/init(forWriting:settings:)
func (AVAudioFile) IsOpen ¶
func (a AVAudioFile) IsOpen() bool
A Boolean value that indicates whether the file is open.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/isOpen
func (AVAudioFile) Length ¶
func (a AVAudioFile) Length() AVAudioFramePosition
The number of sample frames in the file.
Discussion ¶
This value can be computationally expensive to determine the first time you access it.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/length
func (AVAudioFile) ProcessingFormat ¶
func (a AVAudioFile) ProcessingFormat() IAVAudioFormat
The processing format of the file.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/processingFormat
func (AVAudioFile) ReadIntoBufferError ¶
func (a AVAudioFile) ReadIntoBufferError(buffer IAVAudioPCMBuffer) (bool, error)
Reads an entire audio buffer.
buffer: The buffer to read the file into. Its format must match the file’s processing format.
Discussion ¶
When reading sequentially from the [FramePosition] property, the method attempts to fill the buffer to its capacity. On return, the buffer’s [Length] property indicates the number of sample frames it successfully reads.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/read(into:)
func (AVAudioFile) ReadIntoBufferFrameCountError ¶
func (a AVAudioFile) ReadIntoBufferFrameCountError(buffer IAVAudioPCMBuffer, frames AVAudioFrameCount) (bool, error)
Reads a portion of an audio buffer using the number of frames you specify.
buffer: The buffer to read the file into. Its format must match the file’s processing format.
frames: The number of frames to read.
Discussion ¶
You use this method to read fewer frames than the buffer’s `frameCapacity`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/read(into:frameCount:)
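Reading in fixed-size portions means the last read is usually short. A quick arithmetic sketch (hypothetical helper) of how many calls a loop over ReadIntoBufferFrameCountError would make for a given file length and chunk size:

```go
package main

import "fmt"

// chunkPlan returns the number of full reads of size chunk and the
// size of the final short read needed to cover length frames.
func chunkPlan(length int64, chunk uint32) (full int64, tail uint32) {
	c := int64(chunk)
	return length / c, uint32(length % c)
}

func main() {
	full, tail := chunkPlan(44100, 4096)
	fmt.Println(full, tail) // 10 3140
}
```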
func (AVAudioFile) SetFramePosition ¶
func (a AVAudioFile) SetFramePosition(value AVAudioFramePosition)
func (AVAudioFile) Url ¶
func (a AVAudioFile) Url() foundation.INSURL
The location of the audio file.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/url
func (AVAudioFile) WriteFromBufferError ¶
func (a AVAudioFile) WriteFromBufferError(buffer IAVAudioPCMBuffer) (bool, error)
Writes an audio buffer sequentially.
buffer: The buffer whose contents to write to the file. Its format must match the file’s processing format.
Discussion ¶
The buffer’s [FrameLength] signifies how much of the buffer the method writes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile/write(from:)
type AVAudioFileClass ¶
type AVAudioFileClass struct {
// contains filtered or unexported fields
}
func GetAVAudioFileClass ¶
func GetAVAudioFileClass() AVAudioFileClass
GetAVAudioFileClass returns the class object for AVAudioFile.
func (AVAudioFileClass) Alloc ¶
func (ac AVAudioFileClass) Alloc() AVAudioFile
Alloc allocates memory for a new instance of the class.
func (AVAudioFileClass) Class ¶
func (ac AVAudioFileClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioFormat ¶
type AVAudioFormat struct {
objectivec.Object
}
An object that describes the representation of an audio format.
Overview ¶
The AVAudioFormat class wraps Core Audio’s AudioStreamBasicDescription, and includes convenience initializers and accessors for common formats, including Core Audio’s standard deinterleaved 32-bit floating point format.
Instances of this class are immutable.
Creating a New Audio Format Representation ¶
- AVAudioFormat.InitStandardFormatWithSampleRateChannelLayout: Creates an audio format instance as a deinterleaved float with the specified sample rate and channel layout.
- AVAudioFormat.InitStandardFormatWithSampleRateChannels: Creates an audio format instance with the specified sample rate and channel count.
- AVAudioFormat.InitWithCommonFormatSampleRateChannelsInterleaved: Creates an audio format instance.
- AVAudioFormat.InitWithCommonFormatSampleRateInterleavedChannelLayout: Creates an audio format instance with the specified audio format, sample rate, interleaved state, and channel layout.
- AVAudioFormat.InitWithSettings: Creates an audio format instance using the specified settings dictionary.
- AVAudioFormat.InitWithStreamDescription: Creates an audio format instance from a stream description.
- AVAudioFormat.InitWithStreamDescriptionChannelLayout: Creates an audio format instance from a stream description and channel layout.
- AVAudioFormat.InitWithCMAudioFormatDescription: Creates an audio format instance from a Core Media audio format description.
Getting the Audio Stream Description ¶
- AVAudioFormat.StreamDescription: The audio format properties of a stream of audio data.
Getting Audio Format Values ¶
- AVAudioFormat.SampleRate: The audio format sampling rate, in hertz.
- AVAudioFormat.ChannelCount: The number of channels of audio data.
- AVAudioFormat.ChannelLayout: The underlying audio channel layout.
- AVAudioFormat.FormatDescription: The audio format description to use with Core Media APIs.
Determining the Audio Format ¶
- AVAudioFormat.Interleaved: A Boolean value that indicates whether the samples mix into one stream.
- AVAudioFormat.Standard: A Boolean value that indicates whether the format is in a deinterleaved native-endian float state.
- AVAudioFormat.CommonFormat: The common format identifier instance.
- AVAudioFormat.Settings: A dictionary that represents the format as a dictionary using audio setting keys.
- AVAudioFormat.MagicCookie: An object that contains metadata that encoders and decoders require.
- AVAudioFormat.SetMagicCookie
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat
func AVAudioFormatFromID ¶
func AVAudioFormatFromID(id objc.ID) AVAudioFormat
AVAudioFormatFromID constructs an AVAudioFormat from an objc.ID.
An object that describes the representation of an audio format.
func NewAVAudioFormat ¶
func NewAVAudioFormat() AVAudioFormat
NewAVAudioFormat creates a new AVAudioFormat instance.
func NewAudioFormatStandardFormatWithSampleRateChannelLayout ¶
func NewAudioFormatStandardFormatWithSampleRateChannelLayout(sampleRate float64, layout IAVAudioChannelLayout) AVAudioFormat
Creates an audio format instance as a deinterleaved float with the specified sample rate and channel layout.
sampleRate: The sampling rate, in hertz.
layout: The channel layout, which must not be `nil`.
Return Value ¶
A new AVAudioFormat instance.
Discussion ¶
The returned AVAudioFormat instance uses the [AudioPCMFormatFloat32] format.
func NewAudioFormatStandardFormatWithSampleRateChannels ¶
func NewAudioFormatStandardFormatWithSampleRateChannels(sampleRate float64, channels AVAudioChannelCount) AVAudioFormat
Creates an audio format instance with the specified sample rate and channel count.
sampleRate: The sampling rate, in hertz.
channels: The channel count.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
The returned AVAudioFormat instance uses the [AudioPCMFormatFloat32] format.
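The standard format is deinterleaved 32-bit float, so each channel occupies its own buffer of 4 bytes per frame. A hedged pure-Go helper (not part of the bindings) computing the per-channel and total byte sizes for a given frame capacity and channel count:

```go
package main

import "fmt"

const bytesPerFloat32Sample = 4

// standardFormatByteSize returns the per-channel and total byte sizes
// of a deinterleaved Float32 buffer (the standard format) with the
// given frame capacity and channel count. Hypothetical helper.
func standardFormatByteSize(frames, channels uint32) (perChannel, total uint64) {
	perChannel = uint64(frames) * bytesPerFloat32Sample
	return perChannel, perChannel * uint64(channels)
}

func main() {
	per, total := standardFormatByteSize(1024, 2)
	fmt.Println(per, total) // 4096 8192
}
```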
func NewAudioFormatWithCMAudioFormatDescription ¶
func NewAudioFormatWithCMAudioFormatDescription(formatDescription coremedia.CMFormatDescriptionRef) AVAudioFormat
Creates an audio format instance from a Core Media audio format description.
formatDescription: The Core Media audio format description.
Return Value ¶
A new AVAudioFormat instance, or `nil` if `formatDescription` isn’t valid.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/init(cmAudioFormatDescription:)
func NewAudioFormatWithCommonFormatSampleRateChannelsInterleaved ¶
func NewAudioFormatWithCommonFormatSampleRateChannelsInterleaved(format AVAudioCommonFormat, sampleRate float64, channels AVAudioChannelCount, interleaved bool) AVAudioFormat
Creates an audio format instance.
format: The audio format.
sampleRate: The sampling rate, in hertz.
channels: The channel count.
interleaved: The Boolean value that indicates whether `format` is in an interleaved state.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
For information about possible `format` values, see AVAudioCommonFormat.
func NewAudioFormatWithCommonFormatSampleRateInterleavedChannelLayout ¶
func NewAudioFormatWithCommonFormatSampleRateInterleavedChannelLayout(format AVAudioCommonFormat, sampleRate float64, interleaved bool, layout IAVAudioChannelLayout) AVAudioFormat
Creates an audio format instance with the specified audio format, sample rate, interleaved state, and channel layout.
format: The audio format.
sampleRate: The sampling rate, in hertz.
interleaved: The Boolean value that indicates whether `format` is in an interleaved state.
layout: The channel layout, which must not be `nil`.
Return Value ¶
A new AVAudioFormat instance.
Discussion ¶
For information about possible `format` values, see AVAudioCommonFormat.
func NewAudioFormatWithSettings ¶
func NewAudioFormatWithSettings(settings foundation.INSDictionary) AVAudioFormat
Creates an audio format instance using the specified settings dictionary.
settings: The settings dictionary.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
Note that many settings dictionary elements aren’t relevant for the format, so this method ignores them. For information about supported dictionary values, see Audio settings.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/init(settings:)
func NewAudioFormatWithStreamDescription ¶
func NewAudioFormatWithStreamDescription(asbd objectivec.IObject) AVAudioFormat
Creates an audio format instance from a stream description.
asbd: The audio stream description.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
If the AudioStreamBasicDescription specifies more than two channels, this method fails and returns `nil`. Instead, use the [InitWithStreamDescriptionChannelLayout] method.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/init(streamDescription:)
func NewAudioFormatWithStreamDescriptionChannelLayout ¶
func NewAudioFormatWithStreamDescriptionChannelLayout(asbd objectivec.IObject, layout IAVAudioChannelLayout) AVAudioFormat
Creates an audio format instance from a stream description and channel layout.
asbd: The audio stream description.
layout: The channel layout.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
When `layout` is `nil`, and `asbd` specifies one or two channels, this method assumes mono or stereo layout, respectively.
If the AudioStreamBasicDescription specifies more than two channels and `layout` is `nil`, this method fails and returns `nil`.
func (AVAudioFormat) AVChannelLayoutKey ¶
func (a AVAudioFormat) AVChannelLayoutKey() string
See: https://developer.apple.com/documentation/avfaudio/avchannellayoutkey
func (AVAudioFormat) Autorelease ¶
func (a AVAudioFormat) Autorelease() AVAudioFormat
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioFormat) ChannelCount ¶
func (a AVAudioFormat) ChannelCount() AVAudioChannelCount
The number of channels of audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/channelCount
func (AVAudioFormat) ChannelLayout ¶
func (a AVAudioFormat) ChannelLayout() IAVAudioChannelLayout
The underlying audio channel layout.
Discussion ¶
Only formats with more than two channels require channel layouts.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/channelLayout
func (AVAudioFormat) CommonFormat ¶
func (a AVAudioFormat) CommonFormat() AVAudioCommonFormat
The common format identifier instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/commonFormat
func (AVAudioFormat) EncodeWithCoder ¶
func (a AVAudioFormat) EncodeWithCoder(coder foundation.INSCoder)
func (AVAudioFormat) FormatDescription ¶
func (a AVAudioFormat) FormatDescription() coremedia.CMFormatDescriptionRef
The audio format description to use with Core Media APIs.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/formatDescription
func (AVAudioFormat) Init ¶
func (a AVAudioFormat) Init() AVAudioFormat
Init initializes the instance.
func (AVAudioFormat) InitStandardFormatWithSampleRateChannelLayout ¶
func (a AVAudioFormat) InitStandardFormatWithSampleRateChannelLayout(sampleRate float64, layout IAVAudioChannelLayout) AVAudioFormat
Creates an audio format instance as a deinterleaved float with the specified sample rate and channel layout.
sampleRate: The sampling rate, in hertz.
layout: The channel layout, which must not be `nil`.
Return Value ¶
A new AVAudioFormat instance.
Discussion ¶
The returned AVAudioFormat instance uses the [AudioPCMFormatFloat32] format.
func (AVAudioFormat) InitStandardFormatWithSampleRateChannels ¶
func (a AVAudioFormat) InitStandardFormatWithSampleRateChannels(sampleRate float64, channels AVAudioChannelCount) AVAudioFormat
Creates an audio format instance with the specified sample rate and channel count.
sampleRate: The sampling rate, in hertz.
channels: The channel count.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
The returned AVAudioFormat instance uses the [AudioPCMFormatFloat32] format.
func (AVAudioFormat) InitWithCMAudioFormatDescription ¶
func (a AVAudioFormat) InitWithCMAudioFormatDescription(formatDescription coremedia.CMFormatDescriptionRef) AVAudioFormat
Creates an audio format instance from a Core Media audio format description.
formatDescription: The Core Media audio format description.
Return Value ¶
A new AVAudioFormat instance, or `nil` if `formatDescription` isn’t valid.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/init(cmAudioFormatDescription:)
func (AVAudioFormat) InitWithCommonFormatSampleRateChannelsInterleaved ¶
func (a AVAudioFormat) InitWithCommonFormatSampleRateChannelsInterleaved(format AVAudioCommonFormat, sampleRate float64, channels AVAudioChannelCount, interleaved bool) AVAudioFormat
Creates an audio format instance.
format: The audio format.
sampleRate: The sampling rate, in hertz.
channels: The channel count.
interleaved: The Boolean value that indicates whether `format` is in an interleaved state.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
For information about possible `format` values, see AVAudioCommonFormat.
func (AVAudioFormat) InitWithCommonFormatSampleRateInterleavedChannelLayout ¶
func (a AVAudioFormat) InitWithCommonFormatSampleRateInterleavedChannelLayout(format AVAudioCommonFormat, sampleRate float64, interleaved bool, layout IAVAudioChannelLayout) AVAudioFormat
Creates an audio format instance with the specified audio format, sample rate, interleaved state, and channel layout.
format: The audio format.
sampleRate: The sampling rate, in hertz.
interleaved: The Boolean value that indicates whether `format` is in an interleaved state.
layout: The channel layout, which must not be `nil`.
Return Value ¶
A new AVAudioFormat instance.
Discussion ¶
For information about possible `format` values, see AVAudioCommonFormat.
func (AVAudioFormat) InitWithSettings ¶
func (a AVAudioFormat) InitWithSettings(settings foundation.INSDictionary) AVAudioFormat
Creates an audio format instance using the specified settings dictionary.
settings: The settings dictionary.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
Note that many settings dictionary elements aren’t relevant for the format, so this method ignores them. For information about supported dictionary values, see Audio settings.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/init(settings:)
func (AVAudioFormat) InitWithStreamDescription ¶
func (a AVAudioFormat) InitWithStreamDescription(asbd objectivec.IObject) AVAudioFormat
Creates an audio format instance from a stream description.
asbd: The audio stream description.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
If the AudioStreamBasicDescription specifies more than two channels, this method fails and returns `nil`. Instead, use the [InitWithStreamDescriptionChannelLayout] method.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/init(streamDescription:)
func (AVAudioFormat) InitWithStreamDescriptionChannelLayout ¶
func (a AVAudioFormat) InitWithStreamDescriptionChannelLayout(asbd objectivec.IObject, layout IAVAudioChannelLayout) AVAudioFormat
Creates an audio format instance from a stream description and channel layout.
asbd: The audio stream description.
layout: The channel layout.
Return Value ¶
A new AVAudioFormat instance, or `nil` if the initialization fails.
Discussion ¶
When `layout` is `nil`, and `asbd` specifies one or two channels, this method assumes mono or stereo layout, respectively.
If the AudioStreamBasicDescription specifies more than two channels and `layout` is `nil`, this method fails and returns `nil`.
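The rule above, that a `nil` layout is acceptable only for mono or stereo stream descriptions, can be captured as a small decision helper; hypothetical, mirroring the documented fallback behavior:

```go
package main

import "fmt"

// needsChannelLayout reports whether creating an AVAudioFormat from a
// stream description with the given channel count requires an explicit
// channel layout: mono and stereo get an assumed layout, while more
// channels require one. Hypothetical helper.
func needsChannelLayout(channels uint32) bool {
	return channels > 2
}

func main() {
	for _, ch := range []uint32{1, 2, 6} {
		fmt.Println(ch, needsChannelLayout(ch))
	}
}
```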
func (AVAudioFormat) Interleaved ¶
func (a AVAudioFormat) Interleaved() bool
A Boolean value that indicates whether the samples mix into one stream.
Discussion ¶
This value is only valid for PCM formats.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/isInterleaved
func (AVAudioFormat) MagicCookie ¶
func (a AVAudioFormat) MagicCookie() foundation.INSData
An object that contains metadata that encoders and decoders require.
Discussion ¶
Encoders produce a `magicCookie` object, and some decoders require it to decode properly.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/magicCookie
func (AVAudioFormat) SampleRate ¶
func (a AVAudioFormat) SampleRate() float64
The audio format sampling rate, in hertz.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/sampleRate
func (AVAudioFormat) SetMagicCookie ¶
func (a AVAudioFormat) SetMagicCookie(value foundation.INSData)
func (AVAudioFormat) Settings ¶
func (a AVAudioFormat) Settings() foundation.INSDictionary
A dictionary that represents the format as a dictionary using audio setting keys.
Discussion ¶
The settings dictionary doesn’t support all formats that the underlying AudioStreamBasicDescription can represent; for unsupported formats, this property returns `nil`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/settings
func (AVAudioFormat) Standard ¶
func (a AVAudioFormat) Standard() bool
A Boolean value that indicates whether the format is in a deinterleaved native-endian float state.
Discussion ¶
This property is `true` if the format is [AudioPCMFormatFloat32]; otherwise, `false`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/isStandard
func (AVAudioFormat) StreamDescription ¶
func (a AVAudioFormat) StreamDescription() objectivec.IObject
The audio format properties of a stream of audio data.
Discussion ¶
Returns an AudioStreamBasicDescription that you use with lower-level audio APIs.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat/streamDescription
type AVAudioFormatClass ¶
type AVAudioFormatClass struct {
// contains filtered or unexported fields
}
func GetAVAudioFormatClass ¶
func GetAVAudioFormatClass() AVAudioFormatClass
GetAVAudioFormatClass returns the class object for AVAudioFormat.
func (AVAudioFormatClass) Alloc ¶
func (ac AVAudioFormatClass) Alloc() AVAudioFormat
Alloc allocates memory for a new instance of the class.
func (AVAudioFormatClass) Class ¶
func (ac AVAudioFormatClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioFrameCount ¶
type AVAudioFrameCount = uint32
AVAudioFrameCount is a number of audio sample frames.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFrameCount
type AVAudioFramePosition ¶
type AVAudioFramePosition = int64
AVAudioFramePosition is a position in an audio file or stream.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFramePosition
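Frame counts convert to durations by dividing by the sample rate. A short sketch, as you would do with an AVAudioFile's Length and its processing format's SampleRate:

```go
package main

import "fmt"

// durationSeconds converts a frame count to seconds at the given
// sample rate.
func durationSeconds(frames int64, sampleRate float64) float64 {
	return float64(frames) / sampleRate
}

func main() {
	// 132300 frames at 44.1 kHz is exactly three seconds.
	fmt.Println(durationSeconds(132300, 44100)) // 3
}
```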
type AVAudioIONode ¶
type AVAudioIONode struct {
AVAudioNode
}
An object that performs audio input or output in the engine.
Overview ¶
When rendering to and from an audio device in macOS, AVAudioInputNode and AVAudioOutputNode communicate with the system’s default input and output devices. In iOS, they communicate with the devices appropriate to the app’s AVAudioSession category, configurations, and user actions, such as connecting or disconnecting external devices.
In the manual rendering mode, AVAudioInputNode and AVAudioOutputNode perform the input and output in the engine in response to the client’s request.
Getting the Audio Unit ¶
- AVAudioIONode.AudioUnit: The node’s underlying audio unit, if any.
Getting the I/O Latency ¶
- AVAudioIONode.PresentationLatency: The presentation or hardware latency, applicable when rendering to or from an audio device.
Getting and Setting the Voice Processing State ¶
- AVAudioIONode.SetVoiceProcessingEnabledError: Enables or disables voice processing on the I/O node.
- AVAudioIONode.VoiceProcessingEnabled: A Boolean value that indicates whether voice processing is in an enabled state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioIONode
func AVAudioIONodeFromID ¶
func AVAudioIONodeFromID(id objc.ID) AVAudioIONode
AVAudioIONodeFromID constructs an AVAudioIONode from an objc.ID.
An object that performs audio input or output in the engine.
func NewAVAudioIONode ¶
func NewAVAudioIONode() AVAudioIONode
NewAVAudioIONode creates a new AVAudioIONode instance.
func (AVAudioIONode) AudioUnit ¶
func (a AVAudioIONode) AudioUnit() IAVAudioUnit
The node’s underlying audio unit, if any.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioIONode/audioUnit
func (AVAudioIONode) Autorelease ¶
func (a AVAudioIONode) Autorelease() AVAudioIONode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioIONode) Init ¶
func (a AVAudioIONode) Init() AVAudioIONode
Init initializes the instance.
func (AVAudioIONode) PresentationLatency ¶
func (a AVAudioIONode) PresentationLatency() float64
The presentation or hardware latency, applicable when rendering to or from an audio device.
Discussion ¶
This corresponds to `kAudioDevicePropertyLatency` and `kAudioStreamPropertyLatency`. For more information, see `AudioHardwareBase.h` in `CoreAudio.framework`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioIONode/presentationLatency
func (AVAudioIONode) SetVoiceProcessingEnabledError ¶
func (a AVAudioIONode) SetVoiceProcessingEnabledError(enabled bool) (bool, error)
Enables or disables voice processing on the I/O node.
enabled: The Boolean value that indicates whether to enable voice processing.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioIONode/setVoiceProcessingEnabled(_:)
func (AVAudioIONode) VoiceProcessingEnabled ¶
func (a AVAudioIONode) VoiceProcessingEnabled() bool
A Boolean value that indicates whether voice processing is in an enabled state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioIONode/isVoiceProcessingEnabled
type AVAudioIONodeClass ¶
type AVAudioIONodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioIONodeClass ¶
func GetAVAudioIONodeClass() AVAudioIONodeClass
GetAVAudioIONodeClass returns the class object for AVAudioIONode.
func (AVAudioIONodeClass) Alloc ¶
func (ac AVAudioIONodeClass) Alloc() AVAudioIONode
Alloc allocates memory for a new instance of the class.
func (AVAudioIONodeClass) Class ¶
func (ac AVAudioIONodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioIONodeInputBlock ¶
type AVAudioIONodeInputBlock = func(uint32) objectivec.IObject
AVAudioIONodeInputBlock is the type of block that a render operation calls to get input data when operating in manual rendering mode.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioIONodeInputBlock
type AVAudioInputNode ¶
type AVAudioInputNode struct {
AVAudioIONode
}
An object that connects to the system’s audio input.
Overview ¶
This node connects to the system’s audio input when rendering to or from an audio device. In manual rendering mode, this node supplies input data to the engine.
This audio node has one element. The format of the input scope reflects:
- The audio hardware sample rate and channel count when it connects to hardware.
- The format of the PCM audio data that the node supplies to the engine in manual rendering mode. For more information, see AVAudioInputNode.SetManualRenderingInputPCMFormatInputBlock.
When rendering from an audio device, the input node doesn’t support format conversion. In this case, the format of the output scope must be the same as the input and the formats for all nodes connected to the input chain.
In manual rendering mode, the format of the output scope is initially the same as that of the input, but you may set it to a different format, in which case the node performs the conversion.
Manually Giving Data to an Audio Engine ¶
- AVAudioInputNode.SetManualRenderingInputPCMFormatInputBlock: Supplies the data through the input node to the engine while operating in the manual rendering mode.
Getting and Setting Voice Processing Properties ¶
- AVAudioInputNode.VoiceProcessingInputMuted: A Boolean that indicates whether the input of the voice processing unit is in a muted state.
- AVAudioInputNode.SetVoiceProcessingInputMuted
- AVAudioInputNode.VoiceProcessingBypassed: A Boolean that indicates whether the node bypasses all microphone uplink processing of the voice-processing unit.
- AVAudioInputNode.SetVoiceProcessingBypassed
- AVAudioInputNode.VoiceProcessingAGCEnabled: A Boolean that indicates whether automatic gain control on the processed microphone uplink signal is active.
- AVAudioInputNode.SetVoiceProcessingAGCEnabled
- AVAudioInputNode.VoiceProcessingOtherAudioDuckingConfiguration: The ducking configuration of nonvoice audio.
- AVAudioInputNode.SetVoiceProcessingOtherAudioDuckingConfiguration
Handling Muted Speech Events ¶
See: https://developer.apple.com/documentation/AVFAudio/AVAudioInputNode
func AVAudioInputNodeFromID ¶
func AVAudioInputNodeFromID(id objc.ID) AVAudioInputNode
AVAudioInputNodeFromID constructs an AVAudioInputNode from an objc.ID.
An object that connects to the system’s audio input.
func NewAVAudioInputNode ¶
func NewAVAudioInputNode() AVAudioInputNode
NewAVAudioInputNode creates a new AVAudioInputNode instance.
func (AVAudioInputNode) Autorelease ¶
func (a AVAudioInputNode) Autorelease() AVAudioInputNode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioInputNode) DestinationForMixerBus ¶
func (a AVAudioInputNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t in a connected state to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
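A sketch of the lookup described above; `input`, `mixer`, and `bus` are assumed to come from your engine graph:

```go
// Fetch the destination fresh on each use: a previously returned value
// can become invalid if the source and mixer are disconnected.
dest := input.DestinationForMixerBus(mixer, bus)
if dest != nil {
	// Affects only this source-to-mixer connection, not the source node.
	dest.SetVolume(0.5)
}
```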
func (AVAudioInputNode) Init ¶
func (a AVAudioInputNode) Init() AVAudioInputNode
Init initializes the instance.
func (AVAudioInputNode) Obstruction ¶
func (a AVAudioInputNode) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
Discussion ¶
The value of `obstruction` is in decibels. The system blocks only the direct path of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudioInputNode) Occlusion ¶
func (a AVAudioInputNode) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
Discussion ¶
The value of `occlusion` is in decibels. The system blocks the direct and reverb paths of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioInputNode) Pan ¶
func (a AVAudioInputNode) Pan() float32
The bus’s stereo pan.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-1.0` to `1.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioInputNode) PointSourceInHeadMode ¶
func (a AVAudioInputNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioInputNode) Position ¶
func (a AVAudioInputNode) Position() AVAudio3DPoint
The location of the source in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioInputNode) Rate ¶
func (a AVAudioInputNode) Rate() float32
A value that changes the playback rate of the input signal.
Discussion ¶
A value of `2.0` results in the output audio playing one octave higher. A value of `0.5` results in the output audio playing one octave lower.
The default value is `1.0`, and the range of valid values is `0.5` to `2.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
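The octave relationship above follows from standard pitch math: a playback-rate factor r shifts pitch by 12·log2(r) semitones. A quick plain-Go check (not part of these bindings):

```go
package main

import (
	"fmt"
	"math"
)

// semitoneShift returns the pitch change, in semitones, produced by a
// playback-rate factor: 2.0 is +12 (one octave up), 0.5 is -12.
func semitoneShift(rate float64) float64 {
	return 12 * math.Log2(rate)
}

func main() {
	fmt.Println(semitoneShift(2.0)) // 12
	fmt.Println(semitoneShift(0.5)) // -12
	fmt.Printf("%.2f\n", semitoneShift(1.5))
}
```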
func (AVAudioInputNode) RenderingAlgorithm ¶
func (a AVAudioInputNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
Discussion ¶
Depending on the current output format of the AVAudioEnvironmentNode instance, the system may only support a subset of the rendering algorithms. You can retrieve an array of valid rendering algorithms by calling the [ApplicableRenderingAlgorithms] function of the AVAudioEnvironmentNode instance.
The default rendering algorithm is [Audio3DMixingRenderingAlgorithmEqualPowerPanning]. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioInputNode) ReverbBlend ¶
func (a AVAudioInputNode) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
Discussion ¶
This property controls the amount of the source’s audio that the AVAudioEnvironmentNode instance processes. A value of `0.5` results in an equal blend of dry and processed (wet) audio.
The default is `0.0`, and the range of valid values is `0.0` (completely dry) to `1.0` (completely wet). Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
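As a conceptual illustration of the dry/wet blend (a plain-Go linear crossfade; the node’s internal blend algorithm isn’t specified here):

```go
package main

import "fmt"

// blendSample mixes a dry and a wet (reverb-processed) sample using a
// blend value in [0, 1]: 0.0 is fully dry, 1.0 fully wet, 0.5 an
// equal blend.
func blendSample(dry, wet, blend float64) float64 {
	return dry*(1-blend) + wet*blend
}

func main() {
	fmt.Println(blendSample(1.0, 0.0, 0.0)) // fully dry: 1
	fmt.Println(blendSample(1.0, 0.0, 1.0)) // fully wet: 0
	fmt.Println(blendSample(2.0, 4.0, 0.5)) // equal blend: 3
}
```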
func (AVAudioInputNode) SetManualRenderingInputPCMFormatInputBlock ¶
func (a AVAudioInputNode) SetManualRenderingInputPCMFormatInputBlock(format IAVAudioFormat, block AVAudioIONodeInputBlock) bool
Supplies the data through the input node to the engine while operating in the manual rendering mode.
format: The format of the PCM audio data the block supplies to the engine.
block: The block the engine calls on the input node to get the audio to send to the output when operating in the manual rendering mode. For more information, see AVAudioIONodeInputBlock.
Discussion ¶
The block must be non-`nil` when using an input node while the engine is operating in manual rendering mode. If you switch the engine to render to and from an audio device, it invalidates any previous block.
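A sketch of wiring the input block; `inputNode`, `format`, and the `nextInputChunk` helper are hypothetical, and the object the block returns must wrap the audio buffer list that AVAudioIONodeInputBlock expects:

```go
ok := inputNode.SetManualRenderingInputPCMFormatInputBlock(format,
	func(frameCount uint32) objectivec.IObject {
		// Supply up to frameCount frames of PCM input in the agreed
		// format, or nil when no data is available for this render call.
		return nextInputChunk(frameCount) // hypothetical helper
	})
if !ok {
	// The engine may reject the block, e.g. outside manual rendering mode.
}
```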
func (AVAudioInputNode) SetMutedSpeechActivityEventListener ¶
func (a AVAudioInputNode) SetMutedSpeechActivityEventListener(listenerBlock AVAudioVoiceProcessingSpeechActivityEventHandler) bool
func (AVAudioInputNode) SetMutedSpeechActivityEventListenerSync ¶
func (a AVAudioInputNode) SetMutedSpeechActivityEventListenerSync(ctx context.Context) (AVAudioVoiceProcessingSpeechActivityEvent, error)
SetMutedSpeechActivityEventListenerSync is a synchronous wrapper around AVAudioInputNode.SetMutedSpeechActivityEventListener. It blocks until the completion handler fires or the context is cancelled.
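A sketch of waiting for a muted-speech event with the synchronous wrapper; `inputNode` is assumed to be the engine’s input node with voice processing enabled, and `handleSpeechActivity` is a hypothetical handler:

```go
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

// Blocks until a speech-activity event fires while the input is muted,
// or until the context is cancelled.
event, err := inputNode.SetMutedSpeechActivityEventListenerSync(ctx)
if err != nil {
	// Timed out, cancelled, or the listener could not be installed.
	return
}
handleSpeechActivity(event)
```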
func (AVAudioInputNode) SetObstruction ¶
func (a AVAudioInputNode) SetObstruction(value float32)
func (AVAudioInputNode) SetOcclusion ¶
func (a AVAudioInputNode) SetOcclusion(value float32)
func (AVAudioInputNode) SetPan ¶
func (a AVAudioInputNode) SetPan(value float32)
func (AVAudioInputNode) SetPointSourceInHeadMode ¶
func (a AVAudioInputNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioInputNode) SetPosition ¶
func (a AVAudioInputNode) SetPosition(value AVAudio3DPoint)
func (AVAudioInputNode) SetRate ¶
func (a AVAudioInputNode) SetRate(value float32)
func (AVAudioInputNode) SetRenderingAlgorithm ¶
func (a AVAudioInputNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioInputNode) SetReverbBlend ¶
func (a AVAudioInputNode) SetReverbBlend(value float32)
func (AVAudioInputNode) SetSourceMode ¶
func (a AVAudioInputNode) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioInputNode) SetVoiceProcessingAGCEnabled ¶
func (a AVAudioInputNode) SetVoiceProcessingAGCEnabled(value bool)
func (AVAudioInputNode) SetVoiceProcessingBypassed ¶
func (a AVAudioInputNode) SetVoiceProcessingBypassed(value bool)
func (AVAudioInputNode) SetVoiceProcessingInputMuted ¶
func (a AVAudioInputNode) SetVoiceProcessingInputMuted(value bool)
func (AVAudioInputNode) SetVoiceProcessingOtherAudioDuckingConfiguration ¶
func (a AVAudioInputNode) SetVoiceProcessingOtherAudioDuckingConfiguration(value AVAudioVoiceProcessingOtherAudioDuckingConfiguration)
func (AVAudioInputNode) SetVolume ¶
func (a AVAudioInputNode) SetVolume(value float32)
func (AVAudioInputNode) SourceMode ¶
func (a AVAudioInputNode) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioInputNode) VoiceProcessingAGCEnabled ¶
func (a AVAudioInputNode) VoiceProcessingAGCEnabled() bool
A Boolean that indicates whether automatic gain control on the processed microphone uplink signal is active.
Discussion ¶
This property is enabled by default.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioInputNode/isVoiceProcessingAGCEnabled
func (AVAudioInputNode) VoiceProcessingBypassed ¶
func (a AVAudioInputNode) VoiceProcessingBypassed() bool
A Boolean that indicates whether the node bypasses all microphone uplink processing of the voice-processing unit.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioInputNode/isVoiceProcessingBypassed
func (AVAudioInputNode) VoiceProcessingInputMuted ¶
func (a AVAudioInputNode) VoiceProcessingInputMuted() bool
A Boolean that indicates whether the input of the voice processing unit is in a muted state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioInputNode/isVoiceProcessingInputMuted
func (AVAudioInputNode) VoiceProcessingOtherAudioDuckingConfiguration ¶
func (a AVAudioInputNode) VoiceProcessingOtherAudioDuckingConfiguration() AVAudioVoiceProcessingOtherAudioDuckingConfiguration
The ducking configuration of nonvoice audio.
Discussion ¶
Use this property to configure the ducking of nonvoice audio, including advanced ducking enablement and the ducking level. Typically, when playing other audio during voice chat, applying a higher level of ducking can increase the intelligibility of the voice chat.
If not set, the default behavior is to disable advanced ducking, with a ducking level set to [AudioVoiceProcessingOtherAudioDuckingLevelDefault].
func (AVAudioInputNode) Volume ¶
func (a AVAudioInputNode) Volume() float32
The bus’s input volume.
Discussion ¶
The default value is `1.0`, and the range of valid values is `0.0` to `1.0`. Only the AVAudioEnvironmentNode and the AVAudioMixerNode implement this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioInputNodeClass ¶
type AVAudioInputNodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioInputNodeClass ¶
func GetAVAudioInputNodeClass() AVAudioInputNodeClass
GetAVAudioInputNodeClass returns the class object for AVAudioInputNode.
func (AVAudioInputNodeClass) Alloc ¶
func (ac AVAudioInputNodeClass) Alloc() AVAudioInputNode
Alloc allocates memory for a new instance of the class.
func (AVAudioInputNodeClass) Class ¶
func (ac AVAudioInputNodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioMixerNode ¶
type AVAudioMixerNode struct {
AVAudioNode
}
An object that takes any number of inputs and converts them into a single output.
Overview ¶
The mixer accepts input at any sample rate and efficiently combines sample rate conversions. It also accepts any channel count and correctly upmixes or downmixes to the output channel count.
Getting and Setting the Mixer Volume ¶
- AVAudioMixerNode.OutputVolume: The mixer’s output volume.
- AVAudioMixerNode.SetOutputVolume
Getting an Input Bus ¶
- AVAudioMixerNode.NextAvailableInputBus: An audio bus that isn’t in a connected state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixerNode
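A brief sketch using the documented calls; attaching and connecting the mixer in an engine graph is omitted, and the import path is hypothetical:

```go
mixer := avfaudio.NewAVAudioMixerNode()

// Overall level for the mixer's output; the valid range is 0.0 to 1.0.
mixer.SetOutputVolume(0.8)

// Find a bus with no current connection, useful when attaching many
// source nodes to the same mixer.
bus := mixer.NextAvailableInputBus()
_ = bus
```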
func AVAudioMixerNodeFromID ¶
func AVAudioMixerNodeFromID(id objc.ID) AVAudioMixerNode
AVAudioMixerNodeFromID constructs an AVAudioMixerNode from an objc.ID.
An object that takes any number of inputs and converts them into a single output.
func NewAVAudioMixerNode ¶
func NewAVAudioMixerNode() AVAudioMixerNode
NewAVAudioMixerNode creates a new AVAudioMixerNode instance.
func (AVAudioMixerNode) Autorelease ¶
func (a AVAudioMixerNode) Autorelease() AVAudioMixerNode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioMixerNode) DestinationForMixerBus ¶
func (a AVAudioMixerNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t in a connected state to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
func (AVAudioMixerNode) Init ¶
func (a AVAudioMixerNode) Init() AVAudioMixerNode
Init initializes the instance.
func (AVAudioMixerNode) NextAvailableInputBus ¶
func (a AVAudioMixerNode) NextAvailableInputBus() AVAudioNodeBus
An audio bus that isn’t in a connected state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixerNode/nextAvailableInputBus
func (AVAudioMixerNode) Obstruction ¶
func (a AVAudioMixerNode) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
Discussion ¶
The value of `obstruction` is in decibels. The system blocks only the direct path of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudioMixerNode) Occlusion ¶
func (a AVAudioMixerNode) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
Discussion ¶
The value of `occlusion` is in decibels. The system blocks the direct and reverb paths of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioMixerNode) OutputVolume ¶
func (a AVAudioMixerNode) OutputVolume() float32
The mixer’s output volume.
Discussion ¶
The range of valid values is `0.0` to `1.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixerNode/outputVolume
func (AVAudioMixerNode) Pan ¶
func (a AVAudioMixerNode) Pan() float32
The bus’s stereo pan.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-1.0` to `1.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioMixerNode) PointSourceInHeadMode ¶
func (a AVAudioMixerNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioMixerNode) Position ¶
func (a AVAudioMixerNode) Position() AVAudio3DPoint
The location of the source in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioMixerNode) Rate ¶
func (a AVAudioMixerNode) Rate() float32
A value that changes the playback rate of the input signal.
Discussion ¶
A value of `2.0` results in the output audio playing one octave higher. A value of `0.5` results in the output audio playing one octave lower.
The default value is `1.0`, and the range of valid values is `0.5` to `2.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
func (AVAudioMixerNode) RenderingAlgorithm ¶
func (a AVAudioMixerNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
Discussion ¶
Depending on the current output format of the AVAudioEnvironmentNode instance, the system may only support a subset of the rendering algorithms. You can retrieve an array of valid rendering algorithms by calling the [ApplicableRenderingAlgorithms] function of the AVAudioEnvironmentNode instance.
The default rendering algorithm is [Audio3DMixingRenderingAlgorithmEqualPowerPanning]. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioMixerNode) ReverbBlend ¶
func (a AVAudioMixerNode) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
Discussion ¶
This property controls the amount of the source’s audio that the AVAudioEnvironmentNode instance processes. A value of `0.5` results in an equal blend of dry and processed (wet) audio.
The default is `0.0`, and the range of valid values is `0.0` (completely dry) to `1.0` (completely wet). Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
func (AVAudioMixerNode) SetObstruction ¶
func (a AVAudioMixerNode) SetObstruction(value float32)
func (AVAudioMixerNode) SetOcclusion ¶
func (a AVAudioMixerNode) SetOcclusion(value float32)
func (AVAudioMixerNode) SetOutputVolume ¶
func (a AVAudioMixerNode) SetOutputVolume(value float32)
func (AVAudioMixerNode) SetPan ¶
func (a AVAudioMixerNode) SetPan(value float32)
func (AVAudioMixerNode) SetPointSourceInHeadMode ¶
func (a AVAudioMixerNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioMixerNode) SetPosition ¶
func (a AVAudioMixerNode) SetPosition(value AVAudio3DPoint)
func (AVAudioMixerNode) SetRate ¶
func (a AVAudioMixerNode) SetRate(value float32)
func (AVAudioMixerNode) SetRenderingAlgorithm ¶
func (a AVAudioMixerNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioMixerNode) SetReverbBlend ¶
func (a AVAudioMixerNode) SetReverbBlend(value float32)
func (AVAudioMixerNode) SetSourceMode ¶
func (a AVAudioMixerNode) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioMixerNode) SetVolume ¶
func (a AVAudioMixerNode) SetVolume(value float32)
func (AVAudioMixerNode) SourceMode ¶
func (a AVAudioMixerNode) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioMixerNode) Volume ¶
func (a AVAudioMixerNode) Volume() float32
The bus’s input volume.
Discussion ¶
The default value is `1.0`, and the range of valid values is `0.0` to `1.0`. Only the AVAudioEnvironmentNode and the AVAudioMixerNode implement this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioMixerNodeClass ¶
type AVAudioMixerNodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioMixerNodeClass ¶
func GetAVAudioMixerNodeClass() AVAudioMixerNodeClass
GetAVAudioMixerNodeClass returns the class object for AVAudioMixerNode.
func (AVAudioMixerNodeClass) Alloc ¶
func (ac AVAudioMixerNodeClass) Alloc() AVAudioMixerNode
Alloc allocates memory for a new instance of the class.
func (AVAudioMixerNodeClass) Class ¶
func (ac AVAudioMixerNodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioMixing ¶
type AVAudioMixing interface {
objectivec.IObject
AVAudio3DMixing
AVAudioStereoMixing
// Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
// The bus’s input volume.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
Volume() float32
// The bus’s input volume.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
SetVolume(value float32)
}
A collection of properties that are applicable to the input bus of a mixer node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing
type AVAudioMixingDestination ¶
type AVAudioMixingDestination struct {
objectivec.Object
}
An object that represents a connection to a mixer node from a node that conforms to the audio mixing protocol.
Overview ¶
You can only use a destination instance when a source node provides it. You can’t use it as a standalone instance.
Getting Mixing Destination Properties ¶
- AVAudioMixingDestination.ConnectionPoint: The underlying mixer connection point.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixingDestination
func AVAudioMixingDestinationFromID ¶
func AVAudioMixingDestinationFromID(id objc.ID) AVAudioMixingDestination
AVAudioMixingDestinationFromID constructs an AVAudioMixingDestination from an objc.ID.
An object that represents a connection to a mixer node from a node that conforms to the audio mixing protocol.
func NewAVAudioMixingDestination ¶
func NewAVAudioMixingDestination() AVAudioMixingDestination
NewAVAudioMixingDestination creates a new AVAudioMixingDestination instance.
func (AVAudioMixingDestination) Autorelease ¶
func (a AVAudioMixingDestination) Autorelease() AVAudioMixingDestination
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioMixingDestination) ConnectionPoint ¶
func (a AVAudioMixingDestination) ConnectionPoint() IAVAudioConnectionPoint
The underlying mixer connection point.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixingDestination/connectionPoint
func (AVAudioMixingDestination) DestinationForMixerBus ¶
func (a AVAudioMixingDestination) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t in a connected state to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
func (AVAudioMixingDestination) Init ¶
func (a AVAudioMixingDestination) Init() AVAudioMixingDestination
Init initializes the instance.
func (AVAudioMixingDestination) Obstruction ¶
func (a AVAudioMixingDestination) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
Discussion ¶
The value of `obstruction` is in decibels. The system blocks only the direct path of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudioMixingDestination) Occlusion ¶
func (a AVAudioMixingDestination) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
Discussion ¶
The value of `occlusion` is in decibels. The system blocks the direct and reverb paths of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioMixingDestination) Pan ¶
func (a AVAudioMixingDestination) Pan() float32
The bus’s stereo pan.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-1.0` to `1.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioMixingDestination) PointSourceInHeadMode ¶
func (a AVAudioMixingDestination) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioMixingDestination) Position ¶
func (a AVAudioMixingDestination) Position() AVAudio3DPoint
The location of the source in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioMixingDestination) Rate ¶
func (a AVAudioMixingDestination) Rate() float32
A value that changes the playback rate of the input signal.
Discussion ¶
A value of `2.0` results in the output audio playing one octave higher. A value of `0.5` results in the output audio playing one octave lower.
The default value is `1.0`, and the range of valid values is `0.5` to `2.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
func (AVAudioMixingDestination) RenderingAlgorithm ¶
func (a AVAudioMixingDestination) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
Discussion ¶
Depending on the current output format of the AVAudioEnvironmentNode instance, the system may only support a subset of the rendering algorithms. You can retrieve an array of valid rendering algorithms by calling the [ApplicableRenderingAlgorithms] function of the AVAudioEnvironmentNode instance.
The default rendering algorithm is [Audio3DMixingRenderingAlgorithmEqualPowerPanning]. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioMixingDestination) ReverbBlend ¶
func (a AVAudioMixingDestination) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
Discussion ¶
This property controls the amount of the source’s audio that the AVAudioEnvironmentNode instance processes. A value of `0.5` results in an equal blend of dry and processed (wet) audio.
The default is `0.0`, and the range of valid values is `0.0` (completely dry) to `1.0` (completely wet). Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
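Conceptually, the blend is a dry/wet crossfade. A stdlib-only sketch of a linear crossfade follows; the node's actual internal mixing curve isn't documented, so this only illustrates what the parameter means:

```go
package main

import "fmt"

// blendDryWet mixes a dry sample with its reverb-processed (wet)
// counterpart. A blend of 0.0 is completely dry, 1.0 is completely
// wet, and 0.5 is an equal blend, matching the property's range.
func blendDryWet(dry, wet, blend float32) float32 {
	return dry*(1-blend) + wet*blend
}

func main() {
	fmt.Println(blendDryWet(0.8, 0.2, 0.0)) // completely dry -> 0.8
	fmt.Println(blendDryWet(0.8, 0.2, 1.0)) // completely wet -> 0.2
	fmt.Println(blendDryWet(0.8, 0.2, 0.5)) // equal blend    -> 0.5
}
```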
func (AVAudioMixingDestination) SetObstruction ¶
func (a AVAudioMixingDestination) SetObstruction(value float32)
func (AVAudioMixingDestination) SetOcclusion ¶
func (a AVAudioMixingDestination) SetOcclusion(value float32)
func (AVAudioMixingDestination) SetPan ¶
func (a AVAudioMixingDestination) SetPan(value float32)
func (AVAudioMixingDestination) SetPointSourceInHeadMode ¶
func (a AVAudioMixingDestination) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioMixingDestination) SetPosition ¶
func (a AVAudioMixingDestination) SetPosition(value AVAudio3DPoint)
func (AVAudioMixingDestination) SetRate ¶
func (a AVAudioMixingDestination) SetRate(value float32)
func (AVAudioMixingDestination) SetRenderingAlgorithm ¶
func (a AVAudioMixingDestination) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioMixingDestination) SetReverbBlend ¶
func (a AVAudioMixingDestination) SetReverbBlend(value float32)
func (AVAudioMixingDestination) SetSourceMode ¶
func (a AVAudioMixingDestination) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioMixingDestination) SetVolume ¶
func (a AVAudioMixingDestination) SetVolume(value float32)
func (AVAudioMixingDestination) SourceMode ¶
func (a AVAudioMixingDestination) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioMixingDestination) Volume ¶
func (a AVAudioMixingDestination) Volume() float32
The bus’s input volume.
Discussion ¶
The default value is `1.0`, and the range of valid values is `0.0` to `1.0`. Only the AVAudioEnvironmentNode and the AVAudioMixerNode implement this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioMixingDestinationClass ¶
type AVAudioMixingDestinationClass struct {
// contains filtered or unexported fields
}
func GetAVAudioMixingDestinationClass ¶
func GetAVAudioMixingDestinationClass() AVAudioMixingDestinationClass
GetAVAudioMixingDestinationClass returns the class object for AVAudioMixingDestination.
func (AVAudioMixingDestinationClass) Alloc ¶
func (ac AVAudioMixingDestinationClass) Alloc() AVAudioMixingDestination
Alloc allocates memory for a new instance of the class.
func (AVAudioMixingDestinationClass) Class ¶
func (ac AVAudioMixingDestinationClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioMixingObject ¶
type AVAudioMixingObject struct {
objectivec.Object
}
AVAudioMixingObject wraps an existing Objective-C object that conforms to the AVAudioMixing protocol.
func AVAudioMixingObjectFromID ¶
func AVAudioMixingObjectFromID(id objc.ID) AVAudioMixingObject
AVAudioMixingObjectFromID constructs an AVAudioMixingObject from an objc.ID. Conformance to the protocol is determined at runtime.
func (AVAudioMixingObject) BaseObject ¶
func (o AVAudioMixingObject) BaseObject() objectivec.Object
func (AVAudioMixingObject) DestinationForMixerBus ¶
func (o AVAudioMixingObject) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t connected to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
func (AVAudioMixingObject) Obstruction ¶
func (o AVAudioMixingObject) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudioMixingObject) Occlusion ¶
func (o AVAudioMixingObject) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioMixingObject) Pan ¶
func (o AVAudioMixingObject) Pan() float32
The bus’s stereo pan.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioMixingObject) PointSourceInHeadMode ¶
func (o AVAudioMixingObject) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioMixingObject) Position ¶
func (o AVAudioMixingObject) Position() AVAudio3DPoint
The location of the source in the 3D environment.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioMixingObject) Rate ¶
func (o AVAudioMixingObject) Rate() float32
A value that changes the playback rate of the input signal.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
func (AVAudioMixingObject) RenderingAlgorithm ¶
func (o AVAudioMixingObject) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioMixingObject) ReverbBlend ¶
func (o AVAudioMixingObject) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
func (AVAudioMixingObject) SetObstruction ¶
func (o AVAudioMixingObject) SetObstruction(value float32)
func (AVAudioMixingObject) SetOcclusion ¶
func (o AVAudioMixingObject) SetOcclusion(value float32)
func (AVAudioMixingObject) SetPan ¶
func (o AVAudioMixingObject) SetPan(value float32)
func (AVAudioMixingObject) SetPointSourceInHeadMode ¶
func (o AVAudioMixingObject) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioMixingObject) SetPosition ¶
func (o AVAudioMixingObject) SetPosition(value AVAudio3DPoint)
func (AVAudioMixingObject) SetRate ¶
func (o AVAudioMixingObject) SetRate(value float32)
func (AVAudioMixingObject) SetRenderingAlgorithm ¶
func (o AVAudioMixingObject) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioMixingObject) SetReverbBlend ¶
func (o AVAudioMixingObject) SetReverbBlend(value float32)
func (AVAudioMixingObject) SetSourceMode ¶
func (o AVAudioMixingObject) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioMixingObject) SetVolume ¶
func (o AVAudioMixingObject) SetVolume(value float32)
func (AVAudioMixingObject) SourceMode ¶
func (o AVAudioMixingObject) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioMixingObject) Volume ¶
func (o AVAudioMixingObject) Volume() float32
The bus’s input volume.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioNode ¶
type AVAudioNode struct {
objectivec.Object
}
An object you use for audio generation, processing, or an I/O block.
Overview ¶
An AVAudioEngine object contains instances of audio nodes that you attach, and this base class provides common functionality. Instances of this class don’t provide useful functionality until you attach them to an engine.
Nodes have input and output busses that serve as connection points. For example, an effect has one input bus and one output bus, and a mixer has multiple input busses and one output bus.
A bus has a format the framework expresses in terms of sample rate and channel count. Formats must match exactly when making connections between nodes, except for connections into AVAudioMixerNode and AVAudioOutputNode, which can convert.
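As a conceptual sketch of that connection rule (stdlib-only, independent of the bindings; the type and function names are illustrative), a bus format is a sample-rate/channel-count pair, and a connection is valid when the formats match exactly unless the destination node can convert:

```go
package main

import "fmt"

// busFormat captures the two properties a bus format expresses:
// sample rate and channel count.
type busFormat struct {
	sampleRate float64
	channels   int
}

// canConnect reports whether a source bus may connect to a destination
// bus. Formats must match exactly unless the destination can convert,
// as AVAudioMixerNode and AVAudioOutputNode can.
func canConnect(src, dst busFormat, dstConverts bool) bool {
	return dstConverts || src == dst
}

func main() {
	stereo48k := busFormat{48000, 2}
	mono44k := busFormat{44100, 1}
	fmt.Println(canConnect(stereo48k, stereo48k, false)) // true: exact match
	fmt.Println(canConnect(stereo48k, mono44k, false))   // false: format mismatch
	fmt.Println(canConnect(stereo48k, mono44k, true))    // true: destination converts
}
```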
Configuring an Input Format Bus ¶
- AVAudioNode.InputFormatForBus: Gets the input format for the bus you specify.
- AVAudioNode.NameForInputBus: Gets the name of the input bus you specify.
- AVAudioNode.NumberOfInputs: The number of input busses for the node.
Creating an Output Format Bus ¶
- AVAudioNode.OutputFormatForBus: Retrieves the output format for the bus you specify.
- AVAudioNode.NameForOutputBus: Retrieves the name of the output bus you specify.
- AVAudioNode.NumberOfOutputs: The number of output busses for the node.
Installing and Removing an Audio Tap ¶
- AVAudioNode.InstallTapOnBusBufferSizeFormatBlock: Installs an audio tap on a bus you specify to record, monitor, and observe the output of the node.
- AVAudioNode.RemoveTapOnBus: Removes an audio tap on a bus you specify.
Getting the Audio Engine for the Node ¶
- AVAudioNode.Engine: The audio engine that manages the node, if any.
Getting the Latest Node Render Time ¶
- AVAudioNode.LastRenderTime: The most recent render time.
Getting Audio Node Properties ¶
- AVAudioNode.AUAudioUnit: An audio unit object that wraps or underlies the implementation’s audio unit.
- AVAudioNode.Latency: The processing latency of the node, in seconds.
- AVAudioNode.OutputPresentationLatency: The maximum render pipeline latency downstream of the node, in seconds.
Resetting the Audio Node ¶
- AVAudioNode.Reset: Clears a unit’s previous processing state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode
func AVAudioNodeFromID ¶
func AVAudioNodeFromID(id objc.ID) AVAudioNode
AVAudioNodeFromID constructs an AVAudioNode from an objc.ID.
An object you use for audio generation, processing, or an I/O block.
func NewAVAudioNode ¶
func NewAVAudioNode() AVAudioNode
NewAVAudioNode creates a new AVAudioNode instance.
func (AVAudioNode) AUAudioUnit ¶
func (a AVAudioNode) AUAudioUnit() objectivec.IObject
An audio unit object that wraps or underlies the implementation’s audio unit.
Discussion ¶
This provides an AUAudioUnit that either wraps or underlies the implementation’s audio unit, depending on how the app packages the audio unit. Apps interact with this to control custom properties, select presets, and change parameters.
Don’t perform operations directly on the audio unit that may conflict with the engine’s state, which includes changing the initialization state, stream formats, channel layouts, or connections to other audio units.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/auAudioUnit
func (AVAudioNode) Autorelease ¶
func (a AVAudioNode) Autorelease() AVAudioNode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioNode) Engine ¶
func (a AVAudioNode) Engine() IAVAudioEngine
The audio engine that manages the node, if any.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/engine
func (AVAudioNode) InputFormatForBus ¶
func (a AVAudioNode) InputFormatForBus(bus AVAudioNodeBus) IAVAudioFormat
Gets the input format for the bus you specify.
bus: An audio node bus.
Return Value ¶
An AVAudioFormat instance that represents the input format of the bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/inputFormat(forBus:)
func (AVAudioNode) InstallTapOnBusBufferSizeFormatBlock ¶
func (a AVAudioNode) InstallTapOnBusBufferSizeFormatBlock(bus AVAudioNodeBus, bufferSize AVAudioFrameCount, format IAVAudioFormat, tapBlock AVAudioNodeTapBlock)
Installs an audio tap on a bus you specify to record, monitor, and observe the output of the node.
bus: The output bus to attach the tap to.
bufferSize: The size of the incoming buffers. The implementation may choose another size.
format: If non-`nil`, the framework applies this format to the output bus you specify. An error occurs when attaching to an output bus that’s already in a connected state. The tap and connection formats (if non-`nil`) on the bus need to be identical. Otherwise, the latter operation overrides the previous format.
For AVAudioOutputNode, you must specify the tap format as `nil`.
tapBlock: A block the framework calls with audio buffers.
Discussion ¶
You can install and remove taps while the engine is in a running state. You can install only one tap on any bus.
func (AVAudioNode) LastRenderTime ¶
func (a AVAudioNode) LastRenderTime() IAVAudioTime
The most recent render time.
Discussion ¶
This value is `nil` if the engine isn’t running or if you don’t connect the node to an input or output node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/lastRenderTime
func (AVAudioNode) Latency ¶
func (a AVAudioNode) Latency() float64
The processing latency of the node, in seconds.
Discussion ¶
This latency reflects the delay due to signal processing. A value of `0` indicates either no latency or an unknown latency.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/latency
func (AVAudioNode) NameForInputBus ¶
func (a AVAudioNode) NameForInputBus(bus AVAudioNodeBus) string
Gets the name of the input bus you specify.
bus: The input bus from an audio node.
Return Value ¶
The name of the input bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/name(forInputBus:)
func (AVAudioNode) NameForOutputBus ¶
func (a AVAudioNode) NameForOutputBus(bus AVAudioNodeBus) string
Retrieves the name of the output bus you specify.
bus: The output bus from an audio node.
Return Value ¶
The name of the output bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/name(forOutputBus:)
func (AVAudioNode) NumberOfInputs ¶
func (a AVAudioNode) NumberOfInputs() uint
The number of input busses for the node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/numberOfInputs
func (AVAudioNode) NumberOfOutputs ¶
func (a AVAudioNode) NumberOfOutputs() uint
The number of output busses for the node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/numberOfOutputs
func (AVAudioNode) OutputFormatForBus ¶
func (a AVAudioNode) OutputFormatForBus(bus AVAudioNodeBus) IAVAudioFormat
Retrieves the output format for the bus you specify.
bus: An audio node bus.
Return Value ¶
An AVAudioFormat instance that represents the output format of the bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/outputFormat(forBus:)
func (AVAudioNode) OutputPresentationLatency ¶
func (a AVAudioNode) OutputPresentationLatency() float64
The maximum render pipeline latency downstream of the node, in seconds.
Discussion ¶
This latency describes the maximum time it takes to present the audio at the output of a node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/outputPresentationLatency
func (AVAudioNode) RemoveTapOnBus ¶
func (a AVAudioNode) RemoveTapOnBus(bus AVAudioNodeBus)
Removes an audio tap on a bus you specify.
bus: The node output bus with the tap to remove.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/removeTap(onBus:)
func (AVAudioNode) Reset ¶
func (a AVAudioNode) Reset()
Clears a unit’s previous processing state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode/reset()
type AVAudioNodeBus ¶
type AVAudioNodeBus = uint
AVAudioNodeBus is the index of a bus on an audio node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNodeBus
type AVAudioNodeClass ¶
type AVAudioNodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioNodeClass ¶
func GetAVAudioNodeClass() AVAudioNodeClass
GetAVAudioNodeClass returns the class object for AVAudioNode.
func (AVAudioNodeClass) Alloc ¶
func (ac AVAudioNodeClass) Alloc() AVAudioNode
Alloc allocates memory for a new instance of the class.
func (AVAudioNodeClass) Class ¶
func (ac AVAudioNodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioNodeCompletionHandler ¶
type AVAudioNodeCompletionHandler = func()
AVAudioNodeCompletionHandler is a general callback handler for an audio node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNodeCompletionHandler
type AVAudioNodeTapBlock ¶
type AVAudioNodeTapBlock = func(AVAudioPCMBuffer, AVAudioTime)
AVAudioNodeTapBlock is the block that receives copies of the output of an audio node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNodeTapBlock
type AVAudioOutputNode ¶
type AVAudioOutputNode struct {
AVAudioIONode
}
An object that connects to the system’s audio output.
Overview ¶
This node connects to the system’s audio output when rendering to or from an audio device. When the engine is in manual rendering mode, this node performs output in response to the client’s requests.
This audio node has one element. The format of the output scope reflects:
- The audio hardware sample rate and channel count when it connects to the hardware.
- The engine’s manual rendering mode output format (see AVAudioOutputNode.ManualRenderingFormat).
The format of the input scope is initially the same as that of the output, but you may set it to a different format, in which case the node converts the audio.
Configuring the Spatial Audio experience ¶
- AVAudioOutputNode.IntendedSpatialExperience: The intended spatial experience for this output node.
- AVAudioOutputNode.SetIntendedSpatialExperience
See: https://developer.apple.com/documentation/AVFAudio/AVAudioOutputNode
func AVAudioOutputNodeFromID ¶
func AVAudioOutputNodeFromID(id objc.ID) AVAudioOutputNode
AVAudioOutputNodeFromID constructs an AVAudioOutputNode from an objc.ID.
An object that connects to the system’s audio output.
func NewAVAudioOutputNode ¶
func NewAVAudioOutputNode() AVAudioOutputNode
NewAVAudioOutputNode creates a new AVAudioOutputNode instance.
func (AVAudioOutputNode) Autorelease ¶
func (a AVAudioOutputNode) Autorelease() AVAudioOutputNode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioOutputNode) Init ¶
func (a AVAudioOutputNode) Init() AVAudioOutputNode
Init initializes the instance.
func (AVAudioOutputNode) IntendedSpatialExperience ¶
func (a AVAudioOutputNode) IntendedSpatialExperience() objectivec.IObject
The intended spatial experience for this output node.
See: https://developer.apple.com/documentation/avfaudio/avaudiooutputnode/intendedspatialexperience-3ts59
func (AVAudioOutputNode) ManualRenderingFormat ¶
func (a AVAudioOutputNode) ManualRenderingFormat() IAVAudioFormat
The render format of the engine in manual rendering mode.
See: https://developer.apple.com/documentation/avfaudio/avaudioengine/manualrenderingformat
func (AVAudioOutputNode) SetIntendedSpatialExperience ¶
func (a AVAudioOutputNode) SetIntendedSpatialExperience(value objectivec.IObject)
func (AVAudioOutputNode) SetManualRenderingFormat ¶
func (a AVAudioOutputNode) SetManualRenderingFormat(value IAVAudioFormat)
type AVAudioOutputNodeClass ¶
type AVAudioOutputNodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioOutputNodeClass ¶
func GetAVAudioOutputNodeClass() AVAudioOutputNodeClass
GetAVAudioOutputNodeClass returns the class object for AVAudioOutputNode.
func (AVAudioOutputNodeClass) Alloc ¶
func (ac AVAudioOutputNodeClass) Alloc() AVAudioOutputNode
Alloc allocates memory for a new instance of the class.
func (AVAudioOutputNodeClass) Class ¶
func (ac AVAudioOutputNodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioPCMBuffer ¶
type AVAudioPCMBuffer struct {
AVAudioBuffer
}
An object that represents an audio buffer you use with PCM audio formats.
Overview ¶
The PCM buffer class provides methods that are useful for manipulating buffers of audio in PCM format.
Creating a PCM Audio Buffer ¶
- AVAudioPCMBuffer.InitWithPCMFormatFrameCapacity: Creates a PCM audio buffer instance for PCM audio data.
- AVAudioPCMBuffer.InitWithPCMFormatBufferListNoCopyDeallocator: Creates a PCM audio buffer instance without copying samples, for PCM audio data, with a specified buffer list and a deallocator closure.
Getting and Setting the Frame Length ¶
- AVAudioPCMBuffer.FrameLength: The current number of valid sample frames in the buffer.
- AVAudioPCMBuffer.SetFrameLength
Accessing PCM Buffer Data ¶
- AVAudioPCMBuffer.FloatChannelData: The buffer’s audio samples as floating point values.
- AVAudioPCMBuffer.FrameCapacity: The buffer’s capacity, in audio sample frames.
- AVAudioPCMBuffer.Int16ChannelData: The buffer’s 16-bit integer audio samples.
- AVAudioPCMBuffer.Int32ChannelData: The buffer’s 32-bit integer audio samples.
- AVAudioPCMBuffer.Stride: The buffer’s number of interleaved channels.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer
func AVAudioPCMBufferFromID ¶
func AVAudioPCMBufferFromID(id objc.ID) AVAudioPCMBuffer
AVAudioPCMBufferFromID constructs an AVAudioPCMBuffer from an objc.ID.
An object that represents an audio buffer you use with PCM audio formats.
func NewAVAudioPCMBuffer ¶
func NewAVAudioPCMBuffer() AVAudioPCMBuffer
NewAVAudioPCMBuffer creates a new AVAudioPCMBuffer instance.
func NewAudioPCMBufferWithPCMFormatFrameCapacity ¶
func NewAudioPCMBufferWithPCMFormatFrameCapacity(format IAVAudioFormat, frameCapacity AVAudioFrameCount) AVAudioPCMBuffer
Creates a PCM audio buffer instance for PCM audio data.
format: The format of the PCM audio the buffer contains.
frameCapacity: The capacity of the buffer in PCM sample frames.
Return Value ¶
A new AVAudioPCMBuffer instance, or `nil` if it’s not possible.
Discussion ¶
The method returns `nil` for any of the following reasons:
- The format has zero bytes per frame.
- The system can’t represent the buffer byte capacity as an unsigned 32-bit integer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer/init(pcmFormat:frameCapacity:)
func (AVAudioPCMBuffer) Autorelease ¶
func (a AVAudioPCMBuffer) Autorelease() AVAudioPCMBuffer
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioPCMBuffer) FloatChannelData ¶
func (a AVAudioPCMBuffer) FloatChannelData() unsafe.Pointer
The buffer’s audio samples as floating point values.
Discussion ¶
The `floatChannelData` property returns pointers to the buffer’s audio samples if the buffer’s format is 32-bit float. It returns `nil` if it’s another format.
The returned pointer is to `format.ChannelCount()` pointers to float. Each of these pointers is to [FrameLength] valid samples, which the class spaces by [Stride] samples.
If the format isn’t interleaved, as with the standard deinterleaved float format, the pointers point to separate chunks of memory, and the [Stride] property value is `1`.
When the format is in an interleaved state, the pointers refer to the same buffer of interleaved samples, each offset by `1` frame, and the [Stride] property value is the number of interleaved channels.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer/floatChannelData
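The stride-based layouts described above can be illustrated with plain Go slices (stdlib-only; this is not the bindings’ actual memory access, which goes through `unsafe.Pointer`):

```go
package main

import "fmt"

// interleavedSample reads channel ch of frame f from an interleaved
// buffer, where the stride equals the number of interleaved channels.
func interleavedSample(buf []float32, channels, f, ch int) float32 {
	return buf[f*channels+ch]
}

// deinterleavedSample reads from separate per-channel buffers, where
// each channel's stride is 1.
func deinterleavedSample(chans [][]float32, f, ch int) float32 {
	return chans[ch][f]
}

func main() {
	// Two frames of stereo audio: L0 R0 L1 R1.
	interleaved := []float32{0.1, 0.2, 0.3, 0.4}
	deinterleaved := [][]float32{{0.1, 0.3}, {0.2, 0.4}}

	fmt.Println(interleavedSample(interleaved, 2, 1, 0))  // L1 -> 0.3
	fmt.Println(deinterleavedSample(deinterleaved, 1, 0)) // L1 -> 0.3
}
```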
func (AVAudioPCMBuffer) FrameCapacity ¶
func (a AVAudioPCMBuffer) FrameCapacity() AVAudioFrameCount
The buffer’s capacity, in audio sample frames.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer/frameCapacity
func (AVAudioPCMBuffer) FrameLength ¶
func (a AVAudioPCMBuffer) FrameLength() AVAudioFrameCount
The current number of valid sample frames in the buffer.
Discussion ¶
By default, the `frameLength` property doesn’t have a useful value upon creation, so you must set this property before using the buffer. The length must be less than or equal to the [FrameCapacity] of the buffer. For deinterleaved formats, [FrameCapacity] refers to the size of one channel’s worth of audio samples.
You may modify the length of the buffer as part of an operation that modifies its contents. Modifying `frameLength` updates the `mDataByteSize` field in each of the underlying AudioBufferList structure’s [AudioBuffer] properties correspondingly, and vice versa.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer/frameLength
func (AVAudioPCMBuffer) Init ¶
func (a AVAudioPCMBuffer) Init() AVAudioPCMBuffer
Init initializes the instance.
func (AVAudioPCMBuffer) InitWithPCMFormatBufferListNoCopyDeallocator ¶
func (a AVAudioPCMBuffer) InitWithPCMFormatBufferListNoCopyDeallocator(format IAVAudioFormat, bufferList objectivec.IObject, deallocator constAudioBufferListHandler) AVAudioPCMBuffer
Creates a PCM audio buffer instance without copying samples, for PCM audio data, with a specified buffer list and a deallocator closure.
format: The format of the PCM audio the buffer contains.
bufferList: The buffer list with the memory to contain the PCM audio data.
deallocator: The closure the method invokes when the resulting PCM buffer object deallocates.
Return Value ¶
A new AVAudioPCMBuffer instance, or `nil` if it’s not possible.
Discussion ¶
Use the deallocator parameter to define your own deallocation behavior for the audio buffer list’s underlying memory. The buffer list sent to the deallocator is identical to the one you specify, in terms of buffer count and each buffer’s mData and mDataByteSize members.
The method returns `nil` for any of the following reasons:
- The format has zero bytes per frame.
- The buffer list you specify contains zero buffers.
- Any buffer’s pointer to its audio data is `nil`.
- The buffers’ data byte sizes aren’t equal, or any buffer’s data byte size is zero.
- The number of buffers doesn’t match the format’s requirement (1 if interleaved, `mChannelsPerFrame` if deinterleaved).
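Those conditions can be sketched as a stdlib-only validation function (illustrative names; the real check happens inside the initializer):

```go
package main

import "fmt"

// validateBufferList mirrors the documented nil-return conditions for a
// no-copy PCM buffer: every buffer must have non-nil data, the byte
// sizes must be equal and nonzero, and the buffer count must match the
// format (1 if interleaved, channelsPerFrame if deinterleaved).
func validateBufferList(byteSizes []int, dataPresent []bool, interleaved bool, channelsPerFrame int) bool {
	if len(byteSizes) == 0 || len(byteSizes) != len(dataPresent) {
		return false
	}
	expected := channelsPerFrame
	if interleaved {
		expected = 1
	}
	if len(byteSizes) != expected {
		return false
	}
	for i, size := range byteSizes {
		if !dataPresent[i] || size == 0 || size != byteSizes[0] {
			return false
		}
	}
	return true
}

func main() {
	// Valid: deinterleaved stereo, two equal non-empty buffers.
	fmt.Println(validateBufferList([]int{4096, 4096}, []bool{true, true}, false, 2)) // true
	// Invalid: unequal byte sizes.
	fmt.Println(validateBufferList([]int{4096, 2048}, []bool{true, true}, false, 2)) // false
}
```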
func (AVAudioPCMBuffer) InitWithPCMFormatBufferListNoCopyDeallocatorSync ¶
func (a AVAudioPCMBuffer) InitWithPCMFormatBufferListNoCopyDeallocatorSync(ctx context.Context, format IAVAudioFormat, bufferList objectivec.IObject) (*objectivec.Object, error)
InitWithPCMFormatBufferListNoCopyDeallocatorSync is a synchronous wrapper around AVAudioPCMBuffer.InitWithPCMFormatBufferListNoCopyDeallocator. It blocks until the completion handler fires or the context is cancelled.
func (AVAudioPCMBuffer) InitWithPCMFormatFrameCapacity ¶
func (a AVAudioPCMBuffer) InitWithPCMFormatFrameCapacity(format IAVAudioFormat, frameCapacity AVAudioFrameCount) AVAudioPCMBuffer
Creates a PCM audio buffer instance for PCM audio data.
format: The format of the PCM audio the buffer contains.
frameCapacity: The capacity of the buffer in PCM sample frames.
Return Value ¶
A new AVAudioPCMBuffer instance, or `nil` if it’s not possible.
Discussion ¶
The method returns `nil` for any of the following reasons:
- The format has zero bytes per frame.
- The system can’t represent the buffer byte capacity as an unsigned 32-bit integer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer/init(pcmFormat:frameCapacity:)
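The 32-bit capacity condition can be sketched in plain Go (illustrative helper; the bindings perform this check internally):

```go
package main

import (
	"fmt"
	"math"
)

// bufferByteCapacity computes frameCapacity * bytesPerFrame and reports
// whether the result fits in an unsigned 32-bit integer, mirroring the
// two documented reasons the initializer can return nil.
func bufferByteCapacity(frameCapacity, bytesPerFrame uint32) (uint32, bool) {
	if bytesPerFrame == 0 {
		return 0, false // the format has zero bytes per frame
	}
	total := uint64(frameCapacity) * uint64(bytesPerFrame)
	if total > math.MaxUint32 {
		return 0, false // capacity not representable as a 32-bit integer
	}
	return uint32(total), true
}

func main() {
	// 1024 frames of deinterleaved 32-bit float audio: 4 bytes per frame.
	if n, ok := bufferByteCapacity(1024, 4); ok {
		fmt.Println(n) // 4096
	}
	_, ok := bufferByteCapacity(math.MaxUint32, 8)
	fmt.Println(ok) // false: overflows uint32
}
```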
func (AVAudioPCMBuffer) Int16ChannelData ¶
func (a AVAudioPCMBuffer) Int16ChannelData() unsafe.Pointer
The buffer’s 16-bit integer audio samples.
Discussion ¶
The `int16ChannelData` property returns the buffer’s audio samples if the buffer’s format has 2-byte integer samples, or `nil` if it’s another format. For more information, see [FloatChannelData].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer/int16ChannelData
func (AVAudioPCMBuffer) Int32ChannelData ¶
func (a AVAudioPCMBuffer) Int32ChannelData() unsafe.Pointer
The buffer’s 32-bit integer audio samples.
Discussion ¶
The `int32ChannelData` property returns the buffer’s audio samples if the buffer’s format has 4-byte integer samples, or `nil` if it’s another format. For more information, see [FloatChannelData].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer/int32ChannelData
func (AVAudioPCMBuffer) SetFrameLength ¶
func (a AVAudioPCMBuffer) SetFrameLength(value AVAudioFrameCount)
func (AVAudioPCMBuffer) Stride ¶
func (a AVAudioPCMBuffer) Stride() uint
The buffer’s number of interleaved channels.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer/stride
type AVAudioPCMBufferClass ¶
type AVAudioPCMBufferClass struct {
// contains filtered or unexported fields
}
func GetAVAudioPCMBufferClass ¶
func GetAVAudioPCMBufferClass() AVAudioPCMBufferClass
GetAVAudioPCMBufferClass returns the class object for AVAudioPCMBuffer.
func (AVAudioPCMBufferClass) Alloc ¶
func (ac AVAudioPCMBufferClass) Alloc() AVAudioPCMBuffer
Alloc allocates memory for a new instance of the class.
func (AVAudioPCMBufferClass) Class ¶
func (ac AVAudioPCMBufferClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioPacketCount ¶
type AVAudioPacketCount = uint32
AVAudioPacketCount is the number of packets of audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPacketCount
type AVAudioPlayer ¶
type AVAudioPlayer struct {
objectivec.Object
}
An object that plays audio data from a file or buffer.
Overview ¶
Use an audio player to:
- Play audio of any duration from a file or buffer
- Control the volume, panning, rate, and looping behavior of the played audio
- Access playback-level metering data
- Play multiple sounds simultaneously by synchronizing the playback of multiple players
For more information about preparing your app to play audio, see Configuring your app for media playback.
Creating an audio player ¶
- AVAudioPlayer.InitWithContentsOfURLError: Creates a player to play audio from a file.
- AVAudioPlayer.InitWithContentsOfURLFileTypeHintError: Creates a player to play audio from a file of a particular type.
- AVAudioPlayer.InitWithDataError: Creates a player to play in-memory audio data.
- AVAudioPlayer.InitWithDataFileTypeHintError: Creates a player to play in-memory audio data of a particular type.
Controlling playback ¶
- AVAudioPlayer.PrepareToPlay: Prepares the player for audio playback.
- AVAudioPlayer.Play: Plays audio asynchronously.
- AVAudioPlayer.PlayAtTime: Plays audio asynchronously, starting at a specified point in the audio output device’s timeline.
- AVAudioPlayer.Pause: Pauses audio playback.
- AVAudioPlayer.Stop: Stops playback and undoes the setup the system requires for playback.
- AVAudioPlayer.Playing: A Boolean value that indicates whether the player is currently playing audio.
Configuring playback settings ¶
- AVAudioPlayer.Volume: The audio player’s volume relative to other audio output.
- AVAudioPlayer.SetVolume
- AVAudioPlayer.SetVolumeFadeDuration: Changes the audio player’s volume over a duration of time.
- AVAudioPlayer.Pan: The audio player’s stereo pan position.
- AVAudioPlayer.SetPan
- AVAudioPlayer.EnableRate: A Boolean value that indicates whether you can adjust the playback rate of the audio player.
- AVAudioPlayer.SetEnableRate
- AVAudioPlayer.Rate: The audio player’s playback rate.
- AVAudioPlayer.SetRate
- AVAudioPlayer.NumberOfLoops: The number of times the audio repeats playback.
- AVAudioPlayer.SetNumberOfLoops
Accessing player timing ¶
- AVAudioPlayer.CurrentTime: The current playback time, in seconds, within the audio timeline.
- AVAudioPlayer.SetCurrentTime
- AVAudioPlayer.Duration: The total duration, in seconds, of the player’s audio.
Configuring the Spatial Audio experience ¶
- AVAudioPlayer.IntendedSpatialExperience: The intended spatial experience for this player.
- AVAudioPlayer.SetIntendedSpatialExperience
Managing audio channels ¶
- AVAudioPlayer.NumberOfChannels: The number of audio channels in the player’s audio.
Managing audio-level metering ¶
- AVAudioPlayer.MeteringEnabled: A Boolean value that indicates whether the player is able to generate audio-level metering data.
- AVAudioPlayer.SetMeteringEnabled
- AVAudioPlayer.UpdateMeters: Refreshes the average and peak power values for all channels of an audio player.
- AVAudioPlayer.AveragePowerForChannel: Returns the average power, in decibels full-scale (dBFS), for an audio channel.
- AVAudioPlayer.PeakPowerForChannel: Returns the peak power, in decibels full-scale (dBFS), for an audio channel.
Responding to player events ¶
- AVAudioPlayer.Delegate: The delegate object for the audio player.
- AVAudioPlayer.SetDelegate
Inspecting the audio data ¶
- AVAudioPlayer.Url: The URL of the audio file.
- AVAudioPlayer.Data: The audio data associated with the player.
- AVAudioPlayer.Format: The format of the player’s audio data.
- AVAudioPlayer.Settings: A dictionary that provides information about the player’s audio data.
Accessing device information ¶
- AVAudioPlayer.CurrentDevice: The unique identifier (UID) of the audio output device the player uses.
- AVAudioPlayer.SetCurrentDevice
- AVAudioPlayer.DeviceCurrentTime: The time value, in seconds, of the audio output device’s clock.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer
func AVAudioPlayerFromID ¶
func AVAudioPlayerFromID(id objc.ID) AVAudioPlayer
AVAudioPlayerFromID constructs an AVAudioPlayer from an objc.ID.
An object that plays audio data from a file or buffer.
func NewAVAudioPlayer ¶
func NewAVAudioPlayer() AVAudioPlayer
NewAVAudioPlayer creates a new AVAudioPlayer instance.
func NewAudioPlayerWithContentsOfURLError ¶
func NewAudioPlayerWithContentsOfURLError(url foundation.INSURL) (AVAudioPlayer, error)
Creates a player to play audio from a file.
url: A URL that identifies the local audio file to play.
Return Value ¶
A new audio player instance, or nil if an error occurs.
Discussion ¶
The audio data must be in a format that Core Audio supports.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/init(contentsOf:)
func NewAudioPlayerWithContentsOfURLFileTypeHintError ¶
func NewAudioPlayerWithContentsOfURLFileTypeHintError(url foundation.INSURL, utiString string) (AVAudioPlayer, error)
Creates a player to play audio from a file of a particular type.
url: A URL that identifies the local audio file to play.
utiString: The uniform type identifier (UTI) string of the file format.
Return Value ¶
A new audio player instance, or nil if there is an error.
Discussion ¶
The audio data must be in a format that Core Audio supports. Passing a file type hint helps the system parse the data if it can’t determine the file type or if the data is corrupt. See AVFileType for supported values.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/init(contentsOf:fileTypeHint:)
func NewAudioPlayerWithDataError ¶
func NewAudioPlayerWithDataError(data foundation.INSData) (AVAudioPlayer, error)
Creates a player to play in-memory audio data.
data: A buffer with the audio data to play.
Return Value ¶
A new audio player instance, or nil if an error occurs.
Discussion ¶
The audio data must be in a format that Core Audio supports.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/init(data:)
func NewAudioPlayerWithDataFileTypeHintError ¶
func NewAudioPlayerWithDataFileTypeHintError(data foundation.INSData, utiString string) (AVAudioPlayer, error)
Creates a player to play in-memory audio data of a particular type.
data: A buffer with the audio data to play.
utiString: The uniform type identifier (UTI) string of the file format.
Return Value ¶
A new audio player instance, or nil if an error occurs.
Discussion ¶
The audio data must be in a format that Core Audio supports. Passing a file type hint helps the system parse the data if it can’t determine the file type or if the data is corrupt. See AVFileType for supported values.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/init(data:fileTypeHint:)
func (AVAudioPlayer) Autorelease ¶
func (a AVAudioPlayer) Autorelease() AVAudioPlayer
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioPlayer) AveragePowerForChannel ¶
func (a AVAudioPlayer) AveragePowerForChannel(channelNumber uint) float32
Returns the average power, in decibels full-scale (dBFS), for an audio channel.
channelNumber: The audio channel with the average power value you want to retrieve. Channel numbers are zero-indexed. A monaural signal, or the left channel of a stereo signal, has channel number `0`.
Return Value ¶
A floating-point value, in dBFS, that indicates the audio channel’s current average power.
Discussion ¶
Before asking the player for its average power value, you must call [UpdateMeters] to generate the latest data. The returned value ranges from `-160` dBFS, indicating minimum power, to `0` dBFS, indicating maximum power.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/averagePower(forChannel:)
func (AVAudioPlayer) CurrentDevice ¶
func (a AVAudioPlayer) CurrentDevice() string
The unique identifier (UID) of the audio output device the player uses.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/currentDevice
func (AVAudioPlayer) CurrentTime ¶
func (a AVAudioPlayer) CurrentTime() float64
The current playback time, in seconds, within the audio timeline.
Discussion ¶
If the sound is playing, this property value is the offset, in seconds, from the start of the sound. If the sound isn’t playing, this property indicates the offset from where playback starts upon calling the [Play] method.
Use this property to seek to a specific time in the audio data or to implement audio fast-forward and rewind functions.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/currentTime
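As the discussion notes, you implement fast-forward and rewind by assigning a new value through SetCurrentTime. A seek helper, sketched below in plain Go with a hypothetical `skip` function, clamps the result to the valid `0`-to-Duration range before you hand it to the player.

```go
package main

import "fmt"

// skip computes a new playback position for fast-forward (positive
// delta) or rewind (negative delta), clamped to [0, duration], as a
// value you could assign via SetCurrentTime.
func skip(currentTime, delta, duration float64) float64 {
	t := currentTime + delta
	if t < 0 {
		return 0
	}
	if t > duration {
		return duration
	}
	return t
}

func main() {
	fmt.Println(skip(10, 15, 20)) // clamps to the 20 s duration
	fmt.Println(skip(5, -10, 20)) // rewinds no further than 0
}
```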
func (AVAudioPlayer) Data ¶
func (a AVAudioPlayer) Data() foundation.INSData
The audio data associated with the player.
Discussion ¶
This property is nil if you don’t create the player with a data buffer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/data
func (AVAudioPlayer) Delegate ¶
func (a AVAudioPlayer) Delegate() AVAudioPlayerDelegate
The delegate object for the audio player.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/delegate
func (AVAudioPlayer) DeviceCurrentTime ¶
func (a AVAudioPlayer) DeviceCurrentTime() float64
The time value, in seconds, of the audio output device’s clock.
Discussion ¶
The value of this property increases monotonically while an audio player is playing or is in a paused state. If you connect more than one audio player to the audio output device, the time continues incrementing while at least one of the players is playing or is in a paused state. If the audio output device has no connected audio players that are either playing or are in a paused state, device time reverts to `0.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/deviceCurrentTime
func (AVAudioPlayer) Duration ¶
func (a AVAudioPlayer) Duration() float64
The total duration, in seconds, of the player’s audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/duration
func (AVAudioPlayer) EnableRate ¶
func (a AVAudioPlayer) EnableRate() bool
A Boolean value that indicates whether you can adjust the playback rate of the audio player.
Discussion ¶
To enable modifying the player’s rate, set this property to true after you create the player, but before you call [PrepareToPlay].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/enableRate
func (AVAudioPlayer) Format ¶
func (a AVAudioPlayer) Format() IAVAudioFormat
The format of the player’s audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/format
func (AVAudioPlayer) Init ¶
func (a AVAudioPlayer) Init() AVAudioPlayer
Init initializes the instance.
func (AVAudioPlayer) InitWithContentsOfURLError ¶
func (a AVAudioPlayer) InitWithContentsOfURLError(url foundation.INSURL) (AVAudioPlayer, error)
Creates a player to play audio from a file.
url: A URL that identifies the local audio file to play.
Return Value ¶
A new audio player instance, or nil if an error occurs.
Discussion ¶
The audio data must be in a format that Core Audio supports.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/init(contentsOf:)
func (AVAudioPlayer) InitWithContentsOfURLFileTypeHintError ¶
func (a AVAudioPlayer) InitWithContentsOfURLFileTypeHintError(url foundation.INSURL, utiString string) (AVAudioPlayer, error)
Creates a player to play audio from a file of a particular type.
url: A URL that identifies the local audio file to play.
utiString: The uniform type identifier (UTI) string of the file format.
Return Value ¶
A new audio player instance, or nil if there is an error.
Discussion ¶
The audio data must be in a format that Core Audio supports. Passing a file type hint helps the system parse the data if it can’t determine the file type or if the data is corrupt. See AVFileType for supported values.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/init(contentsOf:fileTypeHint:)
func (AVAudioPlayer) InitWithDataError ¶
func (a AVAudioPlayer) InitWithDataError(data foundation.INSData) (AVAudioPlayer, error)
Creates a player to play in-memory audio data.
data: A buffer with the audio data to play.
Return Value ¶
A new audio player instance, or nil if an error occurs.
Discussion ¶
The audio data must be in a format that Core Audio supports.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/init(data:)
func (AVAudioPlayer) InitWithDataFileTypeHintError ¶
func (a AVAudioPlayer) InitWithDataFileTypeHintError(data foundation.INSData, utiString string) (AVAudioPlayer, error)
Creates a player to play in-memory audio data of a particular type.
data: A buffer with the audio data to play.
utiString: The uniform type identifier (UTI) string of the file format.
Return Value ¶
A new audio player instance, or nil if an error occurs.
Discussion ¶
The audio data must be in a format that Core Audio supports. Passing a file type hint helps the system parse the data if it can’t determine the file type or if the data is corrupt. See AVFileType for supported values.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/init(data:fileTypeHint:)
func (AVAudioPlayer) IntendedSpatialExperience ¶
func (a AVAudioPlayer) IntendedSpatialExperience() objectivec.IObject
The intended spatial experience for this player.
See: https://developer.apple.com/documentation/avfaudio/avaudioplayer/intendedspatialexperience-27klj
func (AVAudioPlayer) MeteringEnabled ¶
func (a AVAudioPlayer) MeteringEnabled() bool
A Boolean value that indicates whether the player is able to generate audio-level metering data.
Discussion ¶
By default, the player doesn’t generate audio-level metering data. Because metering uses computing resources, enable it only if you intend to use it.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/isMeteringEnabled
func (AVAudioPlayer) NumberOfChannels ¶
func (a AVAudioPlayer) NumberOfChannels() uint
The number of audio channels in the player’s audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/numberOfChannels
func (AVAudioPlayer) NumberOfLoops ¶
func (a AVAudioPlayer) NumberOfLoops() int
The number of times the audio repeats playback.
Discussion ¶
The default value of `0` results in the sound playing once. Set a positive integer value to specify the number of times to repeat the sound. For example, a value of `1` plays the sound twice: the original sound and one repetition. Set a negative integer value to loop the sound continuously until you call the [Stop] method.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/numberOfLoops
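The loop-count semantics above translate to a simple rule: a value of `n >= 0` plays the sound `n + 1` times, and any negative value loops until you call [Stop]. The plain-Go helper below (a hypothetical `totalPlays` function, not part of the bindings) makes that rule explicit.

```go
package main

import "fmt"

// totalPlays returns how many times audio plays for a given
// NumberOfLoops value: 0 plays once, n > 0 plays n+1 times, and a
// negative value loops until Stop (represented here by -1).
func totalPlays(numberOfLoops int) int {
	if numberOfLoops < 0 {
		return -1 // loops indefinitely until Stop
	}
	return numberOfLoops + 1
}

func main() {
	fmt.Println(totalPlays(0))  // plays once
	fmt.Println(totalPlays(1))  // original plus one repetition
	fmt.Println(totalPlays(-1)) // loops until Stop
}
```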
func (AVAudioPlayer) Pan ¶
func (a AVAudioPlayer) Pan() float32
The audio player’s stereo pan position.
Discussion ¶
Set this property value to position the audio in the stereo field. Use a value of `-1.0` to indicate full left, `1.0` for full right, and `0.0` for center.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/pan
func (AVAudioPlayer) Pause ¶
func (a AVAudioPlayer) Pause()
Pauses audio playback.
Discussion ¶
Unlike calling [Stop], pausing playback doesn’t deallocate hardware resources. It leaves the audio ready to resume playback from where it stops.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/pause()
func (AVAudioPlayer) PeakPowerForChannel ¶
func (a AVAudioPlayer) PeakPowerForChannel(channelNumber uint) float32
Returns the peak power, in decibels full-scale (dBFS), for an audio channel.
channelNumber: The audio channel with the peak power value you want to obtain. Channel numbers are zero-indexed. A monaural signal, or the left channel of a stereo signal, has channel number `0`.
Return Value ¶
A floating-point value, in dBFS, that indicates the audio channel’s current peak power.
Discussion ¶
Before asking the player for its peak power value, you must call [UpdateMeters] to generate the latest data. The returned value ranges from `-160` dBFS, indicating minimum power, to `0` dBFS, indicating maximum power.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/peakPower(forChannel:)
func (AVAudioPlayer) Play ¶
func (a AVAudioPlayer) Play() bool
Plays audio asynchronously.
Return Value ¶
true if playback starts successfully; otherwise, false.
Discussion ¶
Calling this method implicitly calls [PrepareToPlay] if the audio player is unprepared for playback.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/play()
func (AVAudioPlayer) PlayAtTime ¶
func (a AVAudioPlayer) PlayAtTime(time float64) bool
Plays audio asynchronously, starting at a specified point in the audio output device’s timeline.
time: The audio device time to begin playback. This time must be later than the device’s current time.
Return Value ¶
true if playback starts successfully; otherwise, false.
Discussion ¶
Use this method to precisely synchronize the playback of two or more audio player objects.
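The documented synchronization pattern reads one player's [DeviceCurrentTime], adds a small delay, and passes the same target time to every player's [PlayAtTime], so all players start on the shared device timeline. The sketch below uses a hypothetical Go interface and a fake player in place of AVAudioPlayer so the pattern runs without the framework.

```go
package main

import "fmt"

// syncPlayer is a stand-in for the subset of AVAudioPlayer used here.
type syncPlayer interface {
	DeviceCurrentTime() float64
	PlayAtTime(t float64) bool
}

// playTogether starts every player at the same device-clock time,
// a short delay after now, so their output stays in sync.
func playTogether(players []syncPlayer, delay float64) float64 {
	startAt := players[0].DeviceCurrentTime() + delay
	for _, p := range players {
		p.PlayAtTime(startAt) // one shared timeline value for all
	}
	return startAt
}

// fakePlayer records its scheduled start for demonstration.
type fakePlayer struct {
	now     float64
	startAt float64
}

func (f *fakePlayer) DeviceCurrentTime() float64 { return f.now }
func (f *fakePlayer) PlayAtTime(t float64) bool  { f.startAt = t; return true }

func main() {
	a := &fakePlayer{now: 100.0}
	b := &fakePlayer{now: 100.0}
	playTogether([]syncPlayer{a, b}, 0.5)
	fmt.Println(a.startAt, b.startAt) // both share one start time
}
```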
Calling this method implicitly calls [PrepareToPlay] if the audio player is unprepared for playback.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/play(atTime:)
func (AVAudioPlayer) Playing ¶
func (a AVAudioPlayer) Playing() bool
A Boolean value that indicates whether the player is currently playing audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/isPlaying
func (AVAudioPlayer) PrepareToPlay ¶
func (a AVAudioPlayer) PrepareToPlay() bool
Prepares the player for audio playback.
Return Value ¶
true if the system successfully prepares the player; otherwise, false.
Discussion ¶
Calling this method preloads audio buffers and acquires the audio hardware necessary for playback. It also implicitly activates the audio session; if immediate playback isn't necessary, you can deactivate the session afterward by passing false to the session's setActive:error: method. For example, when you use the category option [AudioSessionCategoryOptionDuckOthers], activating the session lowers the volume of audio outside of the app.
The system calls this method automatically when you call [Play], but calling it in advance minimizes the delay between calling [Play] and the start of sound output.
Calling [Stop], or allowing a sound to finish playing, undoes this setup.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/prepareToPlay()
func (AVAudioPlayer) Rate ¶
func (a AVAudioPlayer) Rate() float32
The audio player’s playback rate.
Discussion ¶
To set an audio player’s playback rate, you must first enable the rate adjustment by setting its [EnableRate] property to true.
The default value of this property is `1.0`, which indicates that audio playback occurs at standard speed. This property supports values in the range of `0.5` for half-speed playback to `2.0` for double-speed playback.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/rate
func (AVAudioPlayer) SetCurrentDevice ¶
func (a AVAudioPlayer) SetCurrentDevice(value string)
func (AVAudioPlayer) SetCurrentTime ¶
func (a AVAudioPlayer) SetCurrentTime(value float64)
func (AVAudioPlayer) SetDelegate ¶
func (a AVAudioPlayer) SetDelegate(value AVAudioPlayerDelegate)
func (AVAudioPlayer) SetEnableRate ¶
func (a AVAudioPlayer) SetEnableRate(value bool)
func (AVAudioPlayer) SetIntendedSpatialExperience ¶
func (a AVAudioPlayer) SetIntendedSpatialExperience(value objectivec.IObject)
func (AVAudioPlayer) SetMeteringEnabled ¶
func (a AVAudioPlayer) SetMeteringEnabled(value bool)
func (AVAudioPlayer) SetNumberOfLoops ¶
func (a AVAudioPlayer) SetNumberOfLoops(value int)
func (AVAudioPlayer) SetPan ¶
func (a AVAudioPlayer) SetPan(value float32)
func (AVAudioPlayer) SetRate ¶
func (a AVAudioPlayer) SetRate(value float32)
func (AVAudioPlayer) SetVolume ¶
func (a AVAudioPlayer) SetVolume(value float32)
func (AVAudioPlayer) SetVolumeFadeDuration ¶
func (a AVAudioPlayer) SetVolumeFadeDuration(volume float32, duration float64)
Changes the audio player’s volume over a duration of time.
volume: The target volume.
duration: The duration, in seconds, over which to fade the volume.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/setVolume(_:fadeDuration:)
func (AVAudioPlayer) Settings ¶
func (a AVAudioPlayer) Settings() foundation.INSDictionary
A dictionary that provides information about the player’s audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/settings
func (AVAudioPlayer) Stop ¶
func (a AVAudioPlayer) Stop()
Stops playback and undoes the setup the system requires for playback.
Discussion ¶
Calling this method undoes the resource allocation the system performs in [PrepareToPlay] or [Play]. It doesn’t reset the player’s [CurrentTime] value to `0`, so playback resumes from where it stops.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/stop()
func (AVAudioPlayer) UpdateMeters ¶
func (a AVAudioPlayer) UpdateMeters()
Refreshes the average and peak power values for all channels of an audio player.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/updateMeters()
func (AVAudioPlayer) Url ¶
func (a AVAudioPlayer) Url() foundation.INSURL
The URL of the audio file.
Discussion ¶
This property is nil if you don’t create the player with a URL.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/url
func (AVAudioPlayer) Volume ¶
func (a AVAudioPlayer) Volume() float32
The audio player’s volume relative to other audio output.
Discussion ¶
This property supports values ranging from `0.0` for silence to `1.0` for full volume.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer/volume
type AVAudioPlayerClass ¶
type AVAudioPlayerClass struct {
// contains filtered or unexported fields
}
func GetAVAudioPlayerClass ¶
func GetAVAudioPlayerClass() AVAudioPlayerClass
GetAVAudioPlayerClass returns the class object for AVAudioPlayer.
func (AVAudioPlayerClass) Alloc ¶
func (ac AVAudioPlayerClass) Alloc() AVAudioPlayer
Alloc allocates memory for a new instance of the class.
func (AVAudioPlayerClass) Class ¶
func (ac AVAudioPlayerClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioPlayerDelegate ¶
type AVAudioPlayerDelegate interface {
objectivec.IObject
}
A protocol that defines the methods to respond to audio playback events and decoding errors.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerDelegate
type AVAudioPlayerDelegateConfig ¶
type AVAudioPlayerDelegateConfig struct {
// Responding to Playback Completion
// AudioPlayerDidFinishPlayingSuccessfully — Tells the delegate when the audio finishes playing.
AudioPlayerDidFinishPlayingSuccessfully func(player AVAudioPlayer, flag bool)
// Responding to Audio Decoding Errors
// AudioPlayerDecodeErrorDidOccurError — Tells the delegate when an audio player encounters a decoding error during playback.
AudioPlayerDecodeErrorDidOccurError func(player AVAudioPlayer, error_ foundation.NSError)
}
AVAudioPlayerDelegateConfig holds optional typed callbacks for AVAudioPlayerDelegate methods. Set non-nil fields to register the corresponding Objective-C delegate method. Methods with nil callbacks are not registered, so [NSObject.RespondsToSelector] returns false for them — matching the Objective-C delegate pattern exactly.
See Apple Documentation for protocol details.
type AVAudioPlayerDelegateObject ¶
type AVAudioPlayerDelegateObject struct {
objectivec.Object
}
AVAudioPlayerDelegateObject wraps an existing Objective-C object that conforms to the AVAudioPlayerDelegate protocol.
func AVAudioPlayerDelegateObjectFromID ¶
func AVAudioPlayerDelegateObjectFromID(id objc.ID) AVAudioPlayerDelegateObject
AVAudioPlayerDelegateObjectFromID constructs an AVAudioPlayerDelegateObject from an objc.ID. The object is determined to conform to the protocol at runtime.
func NewAVAudioPlayerDelegate ¶
func NewAVAudioPlayerDelegate(config AVAudioPlayerDelegateConfig) AVAudioPlayerDelegateObject
NewAVAudioPlayerDelegate creates an Objective-C object implementing the AVAudioPlayerDelegate protocol.
Each call registers a unique Objective-C class containing only the methods set in config. This means [NSObject.RespondsToSelector] works correctly for optional delegate methods — only non-nil callbacks are registered.
The returned AVAudioPlayerDelegateObject satisfies the AVAudioPlayerDelegate interface and can be passed directly to SetDelegate and similar methods.
See Apple Documentation for protocol details.
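The nil-callback behavior described above can be illustrated in plain Go: a config struct holds nillable function fields, and a responds-to check mirrors how NewAVAudioPlayerDelegate registers only the non-nil callbacks so [NSObject.RespondsToSelector] answers accurately. The `config` type and `respondsTo` function below are hypothetical models, not the bindings' internals.

```go
package main

import "fmt"

// config mirrors AVAudioPlayerDelegateConfig: each optional
// delegate method is a nillable function field.
type config struct {
	DidFinishPlaying func(success bool)
	DecodeError      func(err error)
}

// respondsTo reports whether a given "selector" was registered,
// mirroring how only non-nil callbacks become delegate methods.
func respondsTo(c config, selector string) bool {
	switch selector {
	case "audioPlayerDidFinishPlaying:successfully:":
		return c.DidFinishPlaying != nil
	case "audioPlayerDecodeErrorDidOccur:error:":
		return c.DecodeError != nil
	}
	return false
}

func main() {
	c := config{DidFinishPlaying: func(ok bool) { fmt.Println("done:", ok) }}
	fmt.Println(respondsTo(c, "audioPlayerDidFinishPlaying:successfully:"))
	fmt.Println(respondsTo(c, "audioPlayerDecodeErrorDidOccur:error:"))
}
```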
func (AVAudioPlayerDelegateObject) AudioPlayerDecodeErrorDidOccurError ¶
func (o AVAudioPlayerDelegateObject) AudioPlayerDecodeErrorDidOccurError(player IAVAudioPlayer, error_ foundation.INSError)
Tells the delegate when an audio player encounters a decoding error during playback.
player: The audio player that encounters the decoding error.
error: The decoding error.
func (AVAudioPlayerDelegateObject) AudioPlayerDidFinishPlayingSuccessfully ¶
func (o AVAudioPlayerDelegateObject) AudioPlayerDidFinishPlayingSuccessfully(player IAVAudioPlayer, flag bool)
Tells the delegate when the audio finishes playing.
player: The audio player that finishes playing.
flag: A Boolean value that indicates whether the audio finishes playing successfully.
Discussion ¶
The system doesn’t call this method on an audio interruption.
func (AVAudioPlayerDelegateObject) BaseObject ¶
func (o AVAudioPlayerDelegateObject) BaseObject() objectivec.Object
type AVAudioPlayerNode ¶
type AVAudioPlayerNode struct {
AVAudioNode
}
An object for scheduling the playback of buffers or segments of audio files.
Overview ¶
This audio node supports scheduling the playback of AVAudioPCMBuffer instances, or segments of audio files that you open through AVAudioFile. You can schedule buffers and segments to play at specific points in time or to play immediately following preceding segments.
Generally, you want to configure the node’s output format with the same number of channels as in the files and buffers. Otherwise, the node drops or adds channels as necessary. It’s usually preferable to use an AVAudioMixerNode for this configuration.
Similarly, when playing file segments, the node makes sample rate conversions, if necessary. It’s preferable to configure the node’s output sample rate to match that of the files, and to use a mixer to perform the rate conversion.
When playing buffers, there’s an implicit assumption that the buffers are at the same sample rate as the node’s output format.
The AVAudioPlayerNode.Stop method unschedules all previously scheduled buffers and file segments, and returns the player timeline to sample time `0`.
Player Timeline ¶
The usual AVAudioNode sample times, which [AVAudioPlayerNode.LastRenderTime] observes, have an arbitrary zero point. The AVAudioPlayerNode class superimposes a second player timeline on top of this to reflect when the player starts and intervals when it pauses. The methods AVAudioPlayerNode.NodeTimeForPlayerTime and AVAudioPlayerNode.PlayerTimeForNodeTime convert between the two.
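Ignoring paused intervals, the player timeline is the node timeline shifted by the node time at which the player started, so the two conversions are inverses. The plain-Go sketch below models that relationship in sample times; the function names are hypothetical and simplified relative to the real NodeTimeForPlayerTime and PlayerTimeForNodeTime, which also account for pauses.

```go
package main

import "fmt"

// playerTimeForNodeTime shifts a node sample time into the player
// timeline, given the node time at which the player started.
// Paused intervals are ignored for simplicity.
func playerTimeForNodeTime(nodeSample, startSample int64) int64 {
	return nodeSample - startSample
}

// nodeTimeForPlayerTime is the inverse conversion.
func nodeTimeForPlayerTime(playerSample, startSample int64) int64 {
	return playerSample + startSample
}

func main() {
	const start = 48_000 // player started one second into node time at 48 kHz
	fmt.Println(playerTimeForNodeTime(96_000, start)) // one second of playback
	fmt.Println(nodeTimeForPlayerTime(48_000, start)) // back to node time
}
```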
Scheduling Playback Time ¶
The AVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionHandler, AVAudioPlayerNode.ScheduleFileAtTimeCompletionHandler, and AVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler methods take an AVAudioTime `when` parameter, and you interpret it as follows:
- If the `when` parameter is `nil`:
  - If there are previous commands, the new one plays immediately following the last one.
  - Otherwise, if the node is in a playing state, the event plays in the very near future.
  - Otherwise, the command plays at sample time `0`.
- If the `when` parameter is a sample time, the node interprets it as such.
- If the `when` parameter is a host time, the system ignores it unless the sample time is invalid while the engine is rendering to an audio device.
The scheduling methods fail if:
- A buffer’s channel count doesn’t match that of the node’s output format.
- The system can’t access a file.
- An AVAudioTime doesn’t specify a valid sample time or host time.
- A segment’s start frame or frame count is a negative value.
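These failure conditions can be expressed as a small validation helper. The sketch below is plain Go with a hypothetical `validateSegment` function that mirrors the documented reasons a schedule call fails; the bindings perform these checks inside the framework, not through any such exported helper.

```go
package main

import (
	"errors"
	"fmt"
)

// validateSegment mirrors the documented reasons a scheduling
// method fails: a channel-count mismatch, an AVAudioTime with no
// valid sample or host time, or negative frame values.
func validateSegment(bufChannels, outChannels int, startFrame, frameCount int64, timeValid bool) error {
	switch {
	case bufChannels != outChannels:
		return errors.New("channel count doesn't match node output format")
	case !timeValid:
		return errors.New("AVAudioTime has no valid sample or host time")
	case startFrame < 0 || frameCount < 0:
		return errors.New("negative start frame or frame count")
	}
	return nil
}

func main() {
	fmt.Println(validateSegment(2, 2, 0, 44100, true)) // passes
	fmt.Println(validateSegment(1, 2, 0, 44100, true)) // channel mismatch
}
```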
Handling Buffer or File Completion ¶
The buffer or file completion handlers provide a means to schedule more data to play on the player node, if available. For more information on the different completion callback types, see AVAudioPlayerNodeCompletionCallbackType.
Rendering Offline ¶
When you use a player node with the engine operating in manual rendering mode, you use the buffer or file completion handlers, together with [AVAudioPlayerNode.LastRenderTime], [AVAudioPlayerNode.Latency], and [AVAudioPlayerNode.OutputPresentationLatency], to track how much data the player has rendered and how much remains to render.
Scheduling Playback ¶
- AVAudioPlayerNode.ScheduleFileAtTimeCompletionHandler: Schedules the playing of an entire audio file.
- AVAudioPlayerNode.ScheduleFileAtTimeCompletionCallbackTypeCompletionHandler: Schedules the playing of an entire audio file with a callback option you specify.
- AVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler: Schedules the playing of an audio file segment.
- AVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler: Schedules the playing of an audio file segment with a callback option you specify.
- AVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionHandler: Schedules the playing of samples from an audio buffer at the time and with the playback options you specify.
- AVAudioPlayerNode.ScheduleBufferCompletionHandler: Schedules the playing of samples from an audio buffer.
- AVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler: Schedules the playing of samples from an audio buffer with the playback options you specify.
- AVAudioPlayerNode.ScheduleBufferCompletionCallbackTypeCompletionHandler: Schedules the playing of samples from an audio buffer with the callback option you specify.
Converting Node and Player Times ¶
- AVAudioPlayerNode.NodeTimeForPlayerTime: Converts from player time to node time.
- AVAudioPlayerNode.PlayerTimeForNodeTime: Converts from node time to player time.
Controlling Playback ¶
- AVAudioPlayerNode.PrepareWithFrameCount: Prepares the file regions or buffers you schedule for playback.
- AVAudioPlayerNode.Play: Starts or resumes playback immediately.
- AVAudioPlayerNode.PlayAtTime: Starts or resumes playback at a time you specify.
- AVAudioPlayerNode.Playing: A Boolean value that indicates whether the player is playing.
- AVAudioPlayerNode.Pause: Pauses the node’s playback.
- AVAudioPlayerNode.Stop: Clears all of the node’s events you schedule and stops playback.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode
func AVAudioPlayerNodeFromID ¶
func AVAudioPlayerNodeFromID(id objc.ID) AVAudioPlayerNode
AVAudioPlayerNodeFromID constructs an AVAudioPlayerNode from an objc.ID.
An object for scheduling the playback of buffers or segments of audio files.
func NewAVAudioPlayerNode ¶
func NewAVAudioPlayerNode() AVAudioPlayerNode
NewAVAudioPlayerNode creates a new AVAudioPlayerNode instance.
func (AVAudioPlayerNode) Autorelease ¶
func (a AVAudioPlayerNode) Autorelease() AVAudioPlayerNode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioPlayerNode) DestinationForMixerBus ¶
func (a AVAudioPlayerNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t connected to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
func (AVAudioPlayerNode) Init ¶
func (a AVAudioPlayerNode) Init() AVAudioPlayerNode
Init initializes the instance.
func (AVAudioPlayerNode) NodeTimeForPlayerTime ¶
func (a AVAudioPlayerNode) NodeTimeForPlayerTime(playerTime IAVAudioTime) IAVAudioTime
Converts from player time to node time.
playerTime: A time relative to the player’s start time.
Return Value ¶
A node time, or `nil` if the player isn’t playing.
Discussion ¶
For more information about this method and its inverse [PlayerTimeForNodeTime], see AVAudioPlayerNode.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode/nodeTime(forPlayerTime:)
func (AVAudioPlayerNode) Obstruction ¶
func (a AVAudioPlayerNode) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
Discussion ¶
The value of `obstruction` is in decibels. The system blocks only the direct path of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudioPlayerNode) Occlusion ¶
func (a AVAudioPlayerNode) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
Discussion ¶
The value of `occlusion` is in decibels. The system blocks the direct and reverb paths of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioPlayerNode) Pan ¶
func (a AVAudioPlayerNode) Pan() float32
The bus’s stereo pan.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-1.0` to `1.0`. Only the AVAudioMixerNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioPlayerNode) Pause ¶
func (a AVAudioPlayerNode) Pause()
Pauses the node’s playback.
Discussion ¶
The player’s sample time doesn’t advance while the node is in a paused state.
Pausing or stopping all of the players you connect to an engine doesn’t pause or stop the engine or the underlying hardware. You must explicitly pause or stop the engine for the hardware to stop. When your app doesn’t need to play audio, pause or stop the engine to minimize power consumption.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode/pause()
func (AVAudioPlayerNode) Play ¶
func (a AVAudioPlayerNode) Play()
Starts or resumes playback immediately.
Discussion ¶
This is equivalent to [PlayAtTime] with a value of `nil`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode/play()
func (AVAudioPlayerNode) PlayAtTime ¶
func (a AVAudioPlayerNode) PlayAtTime(when IAVAudioTime)
Starts or resumes playback at a time you specify.
when: The node time to start or resume playback. Passing `nil` starts playback immediately.
Discussion ¶
This node is initially in a paused state. The framework enqueues your requests to play buffers or file segments, and any necessary decoding begins immediately; playback, however, doesn’t begin until you start the player through this method or [Play].
The following example code shows how to start a player `0.5` seconds in the future:
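A minimal sketch of the host-time arithmetic such an example involves (the 24 MHz tick rate is a hypothetical value; on Apple platforms the current ticks and tick rate come from mach_absolute_time and mach_timebase_info, and the resulting host time would be wrapped in an AVAudioTime before being passed to PlayAtTime):

```go
package main

import "fmt"

// hostTimeAfter converts "seconds from now" into an absolute host time,
// given the host clock's tick rate. An AVAudioTime wrapping the result
// is what PlayAtTime expects.
func hostTimeAfter(nowTicks uint64, seconds, ticksPerSecond float64) uint64 {
	return nowTicks + uint64(seconds*ticksPerSecond)
}

func main() {
	// Start 0.5 s in the future on a hypothetical 24 MHz host clock.
	start := hostTimeAfter(1_000_000, 0.5, 24_000_000)
	fmt.Println(start) // 13000000
}
```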
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode/play(at:)
func (AVAudioPlayerNode) PlayerTimeForNodeTime ¶
func (a AVAudioPlayerNode) PlayerTimeForNodeTime(nodeTime IAVAudioTime) IAVAudioTime
Converts from node time to player time.
nodeTime: The node time.
Return Value ¶
A time relative to the player’s start time, or `nil` if the player isn’t playing.
Discussion ¶
For more information about this method and its inverse [NodeTimeForPlayerTime], see AVAudioPlayerNode.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode/playerTime(forNodeTime:)
func (AVAudioPlayerNode) Playing ¶
func (a AVAudioPlayerNode) Playing() bool
A Boolean value that indicates whether the player is playing.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode/isPlaying
func (AVAudioPlayerNode) PointSourceInHeadMode ¶
func (a AVAudioPlayerNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioPlayerNode) Position ¶
func (a AVAudioPlayerNode) Position() AVAudio3DPoint
The location of the source in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioPlayerNode) PrepareWithFrameCount ¶
func (a AVAudioPlayerNode) PrepareWithFrameCount(frameCount AVAudioFrameCount)
Prepares the file regions or buffers you schedule for playback.
frameCount: The number of sample frames of data to prepare.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode/prepare(withFrameCount:)
func (AVAudioPlayerNode) Rate ¶
func (a AVAudioPlayerNode) Rate() float32
A value that changes the playback rate of the input signal.
Discussion ¶
A value of `2.0` results in the output audio playing one octave higher. A value of `0.5` results in the output audio playing one octave lower.
The default value is `1.0`, and the range of valid values is `0.5` to `2.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
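The octave relationship follows from the rate acting as a frequency multiplier; a small helper makes the pitch shift explicit (illustrative, not part of the binding):

```go
package main

import (
	"fmt"
	"math"
)

// pitchShiftSemitones returns the pitch shift a given playback rate
// produces: rate 2.0 is +12 semitones (one octave up), 0.5 is -12.
func pitchShiftSemitones(rate float64) float64 {
	return 12 * math.Log2(rate)
}

func main() {
	fmt.Println(pitchShiftSemitones(2.0)) // 12
	fmt.Println(pitchShiftSemitones(0.5)) // -12
}
```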
func (AVAudioPlayerNode) RenderingAlgorithm ¶
func (a AVAudioPlayerNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
Discussion ¶
Depending on the current output format of the AVAudioEnvironmentNode instance, the system may support only a subset of the rendering algorithms. You can retrieve an array of valid rendering algorithms by calling the [ApplicableRenderingAlgorithms] function of the AVAudioEnvironmentNode instance.
The default rendering algorithm is [Audio3DMixingRenderingAlgorithmEqualPowerPanning]. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioPlayerNode) ReverbBlend ¶
func (a AVAudioPlayerNode) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
Discussion ¶
This property controls the amount of the source’s audio that the AVAudioEnvironmentNode instance processes. A value of `0.5` results in an equal blend of dry and processed (wet) audio.
The default is `0.0`, and the range of valid values is `0.0` (completely dry) to `1.0` (completely wet). Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
func (AVAudioPlayerNode) ScheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler ¶
func (a AVAudioPlayerNode) ScheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler(buffer IAVAudioPCMBuffer, when IAVAudioTime, options AVAudioPlayerNodeBufferOptions, callbackType AVAudioPlayerNodeCompletionCallbackType, completionHandler ErrorHandler)
Schedules the playing of samples from an audio buffer, with the playback options and callback type you specify.
buffer: The buffer to play.
when: The time the buffer plays.
options: The playback options that control buffer scheduling.
callbackType: An option that specifies when the system calls the completion handler.
completionHandler: The handler the system calls after the player schedules the buffer for playback on the render thread, or the player stops.
func (AVAudioPlayerNode) ScheduleBufferAtTimeOptionsCompletionHandler ¶
func (a AVAudioPlayerNode) ScheduleBufferAtTimeOptionsCompletionHandler(buffer IAVAudioPCMBuffer, when IAVAudioTime, options AVAudioPlayerNodeBufferOptions, completionHandler ErrorHandler)
Schedules the playing of samples from an audio buffer, at the time and with the playback options you specify.
buffer: The buffer to play.
when: The time the buffer plays. For more information, see AVAudioPlayerNode.
options: The playback options that control buffer scheduling.
completionHandler: The handler the system calls after the player schedules the buffer for playback on the render thread, or the player stops.
func (AVAudioPlayerNode) ScheduleBufferCompletionCallbackTypeCompletionHandler ¶
func (a AVAudioPlayerNode) ScheduleBufferCompletionCallbackTypeCompletionHandler(buffer IAVAudioPCMBuffer, callbackType AVAudioPlayerNodeCompletionCallbackType, completionHandler ErrorHandler)
Schedules the playing of samples from an audio buffer, with the callback option you specify.
buffer: The buffer to play.
callbackType: An option that specifies when the system calls the completion handler.
completionHandler: The handler the system calls after the player schedules the buffer for playback on the render thread, or the player stops.
func (AVAudioPlayerNode) ScheduleBufferCompletionHandler ¶
func (a AVAudioPlayerNode) ScheduleBufferCompletionHandler(buffer IAVAudioPCMBuffer, completionHandler ErrorHandler)
Schedules the playing of samples from an audio buffer.
buffer: The buffer to play.
completionHandler: The handler the system calls after the player schedules the buffer for playback on the render thread, or the player stops.
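The scheduling semantics of these methods can be sketched as a toy queue model (illustrative only; the real node decodes and renders audio, and the interrupt behavior comes from the buffer options):

```go
package main

import "fmt"

// queue is a toy model of AVAudioPlayerNode's scheduling behavior:
// buffers play in the order scheduled, and a buffer scheduled with the
// "interrupts" option drops everything already queued.
type queue struct{ pending []string }

func (q *queue) schedule(name string, interrupts bool) {
	if interrupts {
		q.pending = q.pending[:0] // drop previously scheduled buffers
	}
	q.pending = append(q.pending, name)
}

func main() {
	var q queue
	q.schedule("intro", false)
	q.schedule("verse", false)
	q.schedule("stinger", true) // interrupts: clears intro and verse
	fmt.Println(q.pending)      // [stinger]
}
```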
func (AVAudioPlayerNode) ScheduleFileAtTimeCompletionCallbackTypeCompletionHandler ¶
func (a AVAudioPlayerNode) ScheduleFileAtTimeCompletionCallbackTypeCompletionHandler(file IAVAudioFile, when IAVAudioTime, callbackType AVAudioPlayerNodeCompletionCallbackType, completionHandler ErrorHandler)
Schedules the playing of an entire audio file with a callback option you specify.
file: The file to play.
when: The time the file plays.
callbackType: An option that specifies when the system calls the completion handler.
completionHandler: The handler the system calls after the player schedules the file for playback on the render thread, or the player stops.
func (AVAudioPlayerNode) ScheduleFileAtTimeCompletionHandler ¶
func (a AVAudioPlayerNode) ScheduleFileAtTimeCompletionHandler(file IAVAudioFile, when IAVAudioTime, completionHandler ErrorHandler)
Schedules the playing of an entire audio file.
file: The file to play.
when: The time the file plays. For more information, see AVAudioPlayerNode.
completionHandler: The handler the system calls after the player schedules the file for playback on the render thread, or the player stops.
func (AVAudioPlayerNode) ScheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler ¶
func (a AVAudioPlayerNode) ScheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler(file IAVAudioFile, startFrame AVAudioFramePosition, numberFrames AVAudioFrameCount, when IAVAudioTime, callbackType AVAudioPlayerNodeCompletionCallbackType, completionHandler ErrorHandler)
Schedules the playing of an audio file segment with a callback option you specify.
file: The file to play.
startFrame: The starting frame position in the stream.
numberFrames: The number of frames to play.
when: The time the region plays.
callbackType: An option that specifies when the system calls the completion handler.
completionHandler: The handler the system calls after the player schedules the segment for playback on the render thread, or the player stops.
func (AVAudioPlayerNode) ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler ¶
func (a AVAudioPlayerNode) ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler(file IAVAudioFile, startFrame AVAudioFramePosition, numberFrames AVAudioFrameCount, when IAVAudioTime, completionHandler ErrorHandler)
Schedules the playing of an audio file segment.
file: The file to play.
startFrame: The starting frame position in the stream.
numberFrames: The number of frames to play.
when: The time the segment plays. For more information, see AVAudioPlayerNode.
completionHandler: The handler the system calls after the player schedules the segment for playback on the render thread, or the player stops.
func (AVAudioPlayerNode) SetObstruction ¶
func (a AVAudioPlayerNode) SetObstruction(value float32)
func (AVAudioPlayerNode) SetOcclusion ¶
func (a AVAudioPlayerNode) SetOcclusion(value float32)
func (AVAudioPlayerNode) SetPan ¶
func (a AVAudioPlayerNode) SetPan(value float32)
func (AVAudioPlayerNode) SetPointSourceInHeadMode ¶
func (a AVAudioPlayerNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioPlayerNode) SetPosition ¶
func (a AVAudioPlayerNode) SetPosition(value AVAudio3DPoint)
func (AVAudioPlayerNode) SetRate ¶
func (a AVAudioPlayerNode) SetRate(value float32)
func (AVAudioPlayerNode) SetRenderingAlgorithm ¶
func (a AVAudioPlayerNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioPlayerNode) SetReverbBlend ¶
func (a AVAudioPlayerNode) SetReverbBlend(value float32)
func (AVAudioPlayerNode) SetSourceMode ¶
func (a AVAudioPlayerNode) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioPlayerNode) SetVolume ¶
func (a AVAudioPlayerNode) SetVolume(value float32)
func (AVAudioPlayerNode) SourceMode ¶
func (a AVAudioPlayerNode) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioPlayerNode) Stop ¶
func (a AVAudioPlayerNode) Stop()
Clears the node’s previously scheduled events and stops playback.
Discussion ¶
This method clears all scheduled events, including any that are partially played. It resets the node’s sample time to `0`, and the sample time doesn’t advance until the node starts again through [Play] or [PlayAtTime].
Pausing or stopping all of the players you connect to an engine doesn’t pause or stop the engine or the underlying hardware. You must explicitly pause or stop the engine for the hardware to stop. When your app doesn’t need to play audio, pause or stop the engine to minimize power consumption.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode/stop()
func (AVAudioPlayerNode) Volume ¶
func (a AVAudioPlayerNode) Volume() float32
The bus’s input volume.
Discussion ¶
The default value is `1.0`, and the range of valid values is `0.0` to `1.0`. Only the AVAudioEnvironmentNode and AVAudioMixerNode classes implement this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioPlayerNodeBufferOptions ¶
type AVAudioPlayerNodeBufferOptions uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNodeBufferOptions
const (
	// AVAudioPlayerNodeBufferInterrupts: An option that indicates the buffer interrupts any buffer in a playing state.
	AVAudioPlayerNodeBufferInterrupts AVAudioPlayerNodeBufferOptions = 2
	// AVAudioPlayerNodeBufferInterruptsAtLoop: An option that indicates the buffer interrupts any buffer in a playing state at its loop point.
	AVAudioPlayerNodeBufferInterruptsAtLoop AVAudioPlayerNodeBufferOptions = 4
	// AVAudioPlayerNodeBufferLoops: An option that indicates the buffer loops indefinitely.
	AVAudioPlayerNodeBufferLoops AVAudioPlayerNodeBufferOptions = 1
)
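These options combine as a bitmask. A self-contained sketch (local mirror types with the documented values, not the binding's own constants) shows the combination:

```go
package main

import "fmt"

// bufferOptions mirrors the documented AVAudioPlayerNodeBufferOptions
// bit values, to illustrate that the options combine as a bitmask.
type bufferOptions uint

const (
	bufferLoops            bufferOptions = 1
	bufferInterrupts       bufferOptions = 2
	bufferInterruptsAtLoop bufferOptions = 4
)

func main() {
	// Loop a buffer, and let it cut off whatever is currently playing.
	opts := bufferLoops | bufferInterrupts
	fmt.Println(uint(opts))                       // 3
	fmt.Println(opts&bufferLoops != 0)            // true
	fmt.Println(opts&bufferInterruptsAtLoop != 0) // false
}
```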
func (AVAudioPlayerNodeBufferOptions) String ¶
func (e AVAudioPlayerNodeBufferOptions) String() string
type AVAudioPlayerNodeClass ¶
type AVAudioPlayerNodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioPlayerNodeClass ¶
func GetAVAudioPlayerNodeClass() AVAudioPlayerNodeClass
GetAVAudioPlayerNodeClass returns the class object for AVAudioPlayerNode.
func (AVAudioPlayerNodeClass) Alloc ¶
func (ac AVAudioPlayerNodeClass) Alloc() AVAudioPlayerNode
Alloc allocates memory for a new instance of the class.
func (AVAudioPlayerNodeClass) Class ¶
func (ac AVAudioPlayerNodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioPlayerNodeCompletionCallbackType ¶
type AVAudioPlayerNodeCompletionCallbackType int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNodeCompletionCallbackType
const (
	// AVAudioPlayerNodeCompletionDataConsumed: A callback type that indicates the player consumed the buffer or file data.
	AVAudioPlayerNodeCompletionDataConsumed AVAudioPlayerNodeCompletionCallbackType = 0
	// AVAudioPlayerNodeCompletionDataPlayedBack: A callback type that indicates the player finished playing back the buffer or file data.
	AVAudioPlayerNodeCompletionDataPlayedBack AVAudioPlayerNodeCompletionCallbackType = 2
	// AVAudioPlayerNodeCompletionDataRendered: A callback type that indicates the player rendered the buffer or file data.
	AVAudioPlayerNodeCompletionDataRendered AVAudioPlayerNodeCompletionCallbackType = 1
)
func (AVAudioPlayerNodeCompletionCallbackType) String ¶
func (e AVAudioPlayerNodeCompletionCallbackType) String() string
type AVAudioPlayerNodeCompletionHandler ¶
type AVAudioPlayerNodeCompletionHandler = func(AVAudioPlayerNodeCompletionCallbackType)
AVAudioPlayerNodeCompletionHandler is the callback handler for buffer or file completion.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNodeCompletionHandler
type AVAudioQuality ¶
type AVAudioQuality int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioQuality
const (
	// AVAudioQualityHigh: A value that represents a high audio quality for encoding and conversion.
	AVAudioQualityHigh AVAudioQuality = 96
	// AVAudioQualityLow: A value that represents a low audio quality for encoding and conversion.
	AVAudioQualityLow AVAudioQuality = 32
	// AVAudioQualityMax: A value that represents a maximum audio quality for encoding and conversion.
	AVAudioQualityMax AVAudioQuality = 127
	// AVAudioQualityMedium: A value that represents a medium audio quality for encoding and conversion.
	AVAudioQualityMedium AVAudioQuality = 64
	// AVAudioQualityMin: A value that represents a minimum audio quality for encoding and conversion.
	AVAudioQualityMin AVAudioQuality = 0
)
func (AVAudioQuality) String ¶
func (e AVAudioQuality) String() string
type AVAudioRecorder ¶
type AVAudioRecorder struct {
objectivec.Object
}
An object that records audio data to a file.
Overview ¶
Use an audio recorder to:
- Record audio from the system’s active input device
- Record for a specified duration or until the user stops recording
- Pause and resume a recording
- Access recording-level metering data
To record audio in iOS or tvOS, configure your audio session to use the record or playAndRecord category.
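The record/pause/stop lifecycle can be sketched as a toy state model (illustrative only; the real recorder writes audio data and interacts with the audio session):

```go
package main

import "fmt"

// A toy state model of AVAudioRecorder's documented lifecycle: Record
// starts or resumes, Pause suspends without closing the file, and Stop
// closes the file.
type recorderState int

const (
	stopped recorderState = iota
	recording
	paused
)

type recorder struct{ state recorderState }

func (r *recorder) record() { r.state = recording }
func (r *recorder) pause() {
	if r.state == recording {
		r.state = paused
	}
}
func (r *recorder) stop() { r.state = stopped }

func main() {
	var r recorder
	r.record()
	r.pause()
	r.record() // resumes, per the Pause documentation
	fmt.Println(r.state == recording) // true
}
```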
Creating an audio recorder ¶
- AVAudioRecorder.InitWithURLSettingsError: Creates an audio recorder with settings.
- AVAudioRecorder.InitWithURLFormatError: Creates an audio recorder with an audio format.
Controlling recording ¶
- AVAudioRecorder.PrepareToRecord: Creates an audio file and prepares the system for recording.
- AVAudioRecorder.RecordAtTime: Records audio starting at a specific time.
- AVAudioRecorder.RecordForDuration: Records audio for the indicated duration of time.
- AVAudioRecorder.RecordAtTimeForDuration: Records audio starting at a specific time for the indicated duration.
- AVAudioRecorder.Pause: Pauses an audio recording.
- AVAudioRecorder.Stop: Stops recording and closes the audio file.
- AVAudioRecorder.Recording: A Boolean value that indicates whether the audio recorder is recording.
- AVAudioRecorder.DeleteRecording: Deletes a recorded audio file.
Accessing recorder timing ¶
- AVAudioRecorder.CurrentTime: The time, in seconds, since the beginning of the recording.
- AVAudioRecorder.DeviceCurrentTime: The time, in seconds, of the host audio device.
Managing audio-level metering ¶
- AVAudioRecorder.MeteringEnabled: A Boolean value that indicates whether you’ve enabled the recorder to generate audio-level metering data.
- AVAudioRecorder.SetMeteringEnabled
- AVAudioRecorder.UpdateMeters: Refreshes the average and peak power values for all channels of an audio recorder.
- AVAudioRecorder.AveragePowerForChannel: Returns the average power, in decibels full-scale (dBFS), for an audio channel.
- AVAudioRecorder.PeakPowerForChannel: Returns the peak power, in decibels full-scale (dBFS), for an audio channel.
Responding to recorder events ¶
- AVAudioRecorder.Delegate: The delegate object for the audio recorder.
- AVAudioRecorder.SetDelegate
Inspecting the audio data ¶
- AVAudioRecorder.Url: The URL to which the recorder writes its data.
- AVAudioRecorder.Format: The format of the recorded audio.
- AVAudioRecorder.Settings: The settings that describe the format of the recorded audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder
func AVAudioRecorderFromID ¶
func AVAudioRecorderFromID(id objc.ID) AVAudioRecorder
AVAudioRecorderFromID constructs an AVAudioRecorder from an objc.ID.
An object that records audio data to a file.
func NewAVAudioRecorder ¶
func NewAVAudioRecorder() AVAudioRecorder
NewAVAudioRecorder creates a new AVAudioRecorder instance.
func NewAudioRecorderWithURLFormatError ¶
func NewAudioRecorderWithURLFormatError(url foundation.INSURL, format IAVAudioFormat) (AVAudioRecorder, error)
Creates an audio recorder with an audio format.
url: The file system location to record to.
format: The audio format to use for the recording.
Return Value ¶
A new audio recorder, or nil if an error occurred.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/init(url:format:)
func NewAudioRecorderWithURLSettingsError ¶
func NewAudioRecorderWithURLSettingsError(url foundation.INSURL, settings foundation.INSDictionary) (AVAudioRecorder, error)
Creates an audio recorder with settings.
url: The file system location to record to.
settings: The audio settings to use for the recording.
Return Value ¶
A new audio recorder, or nil if an error occurred.
Discussion ¶
The system supports the following keys when defining the format settings:
[Table data omitted]
The system supports additional configuration options based on your selected audio format. See Linear PCM format settings for information about customizing Linear PCM formats and Encoder settings for compressed formats.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/init(url:settings:)
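AVFormatIDKey expects a four-character audio format code packed into an integer. A sketch of that packing, with a plain Go map standing in for the settings dictionary (key names are shown as string literals for illustration; in practice use the framework's exported key constants and the binding's dictionary helpers):

```go
package main

import "fmt"

// fourCC packs a four-character audio format code into the integer
// value AVFormatIDKey expects (e.g. 'aac ' for kAudioFormatMPEG4AAC).
func fourCC(s string) uint32 {
	b := []byte(s)
	return uint32(b[0])<<24 | uint32(b[1])<<16 | uint32(b[2])<<8 | uint32(b[3])
}

func main() {
	// A plain map standing in for the NSDictionary the initializer takes.
	settings := map[string]any{
		"AVFormatIDKey":            fourCC("aac "), // kAudioFormatMPEG4AAC
		"AVSampleRateKey":          44100.0,
		"AVNumberOfChannelsKey":    1,
		"AVEncoderAudioQualityKey": 96, // AVAudioQualityHigh
	}
	fmt.Println(settings["AVFormatIDKey"]) // 1633772320
}
```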
func (AVAudioRecorder) Autorelease ¶
func (a AVAudioRecorder) Autorelease() AVAudioRecorder
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioRecorder) AveragePowerForChannel ¶
func (a AVAudioRecorder) AveragePowerForChannel(channelNumber uint) float32
Returns the average power, in decibels full-scale (dBFS), for an audio channel.
channelNumber: The number of the channel that you want the average power value for.
Return Value ¶
The audio channel’s current average power.
Discussion ¶
Before asking the recorder for its average power value, you must call [UpdateMeters] to generate the latest data. The returned value ranges from `-160` dBFS, indicating minimum power, to `0` dBFS, indicating maximum power.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/averagePower(forChannel:)
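Meter readings arrive in dBFS; converting them to a linear 0–1 level is a common step when driving a UI meter. A small helper (an assumption of typical usage, not part of the binding):

```go
package main

import (
	"fmt"
	"math"
)

// dbfsToLinear converts a power reading in dBFS, as returned by
// AveragePowerForChannel or PeakPowerForChannel, to a 0.0-1.0 level.
func dbfsToLinear(db float32) float32 {
	return float32(math.Pow(10, float64(db)/20))
}

func main() {
	fmt.Println(dbfsToLinear(0)) // 1 (full scale)
	fmt.Println(dbfsToLinear(-20) > 0.09 && dbfsToLinear(-20) < 0.11) // true (about 0.1)
}
```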
func (AVAudioRecorder) CurrentTime ¶
func (a AVAudioRecorder) CurrentTime() float64
The time, in seconds, since the beginning of the recording.
Discussion ¶
The value of this property is `0` when you call it on a stopped audio recorder.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/currentTime
func (AVAudioRecorder) Delegate ¶
func (a AVAudioRecorder) Delegate() AVAudioRecorderDelegate
The delegate object for the audio recorder.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/delegate
func (AVAudioRecorder) DeleteRecording ¶
func (a AVAudioRecorder) DeleteRecording() bool
Deletes a recorded audio file.
Return Value ¶
true if the system deleted the file; otherwise, false.
Discussion ¶
You must stop the audio recorder before calling this method.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/deleteRecording()
func (AVAudioRecorder) DeviceCurrentTime ¶
func (a AVAudioRecorder) DeviceCurrentTime() float64
The time, in seconds, of the host audio device.
Discussion ¶
Use this property value to schedule audio recording using the [RecordAtTime] and [RecordAtTimeForDuration] methods.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/deviceCurrentTime
func (AVAudioRecorder) Format ¶
func (a AVAudioRecorder) Format() IAVAudioFormat
The format of the recorded audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/format
func (AVAudioRecorder) Init ¶
func (a AVAudioRecorder) Init() AVAudioRecorder
Init initializes the instance.
func (AVAudioRecorder) InitWithURLFormatError ¶
func (a AVAudioRecorder) InitWithURLFormatError(url foundation.INSURL, format IAVAudioFormat) (AVAudioRecorder, error)
Creates an audio recorder with an audio format.
url: The file system location to record to.
format: The audio format to use for the recording.
Return Value ¶
A new audio recorder, or nil if an error occurred.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/init(url:format:)
func (AVAudioRecorder) InitWithURLSettingsError ¶
func (a AVAudioRecorder) InitWithURLSettingsError(url foundation.INSURL, settings foundation.INSDictionary) (AVAudioRecorder, error)
Creates an audio recorder with settings.
url: The file system location to record to.
settings: The audio settings to use for the recording.
Return Value ¶
A new audio recorder, or nil if an error occurred.
Discussion ¶
The system supports the following keys when defining the format settings:
[Table data omitted]
The system supports additional configuration options based on your selected audio format. See Linear PCM format settings for information about customizing Linear PCM formats and Encoder settings for compressed formats.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/init(url:settings:)
func (AVAudioRecorder) MeteringEnabled ¶
func (a AVAudioRecorder) MeteringEnabled() bool
A Boolean value that indicates whether you’ve enabled the recorder to generate audio-level metering data.
Discussion ¶
By default, the recorder doesn’t generate audio-level metering data. Because metering uses computing resources, enable it only if you intend to use it.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/isMeteringEnabled
func (AVAudioRecorder) Pause ¶
func (a AVAudioRecorder) Pause()
Pauses an audio recording.
Discussion ¶
Call [Record] to resume recording.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/pause()
func (AVAudioRecorder) PeakPowerForChannel ¶
func (a AVAudioRecorder) PeakPowerForChannel(channelNumber uint) float32
Returns the peak power, in decibels full-scale (dBFS), for an audio channel.
channelNumber: The number of the channel that you want the peak power value for.
Return Value ¶
The audio channel’s current peak power.
Discussion ¶
Before asking the recorder for its peak power value, you must call [UpdateMeters] to generate the latest data. The returned value ranges from `-160` dBFS, indicating minimum power, to `0` dBFS, indicating maximum power.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/peakPower(forChannel:)
func (AVAudioRecorder) PlayAndRecord ¶
func (a AVAudioRecorder) PlayAndRecord() objc.ID
The category for recording (input) and playback (output) of audio, such as for a Voice over Internet Protocol (VoIP) app.
func (AVAudioRecorder) PrepareToRecord ¶
func (a AVAudioRecorder) PrepareToRecord() bool
Creates an audio file and prepares the system for recording.
Return Value ¶
true if successful; otherwise, false.
Discussion ¶
Calling this method creates an audio file at the URL you used to create the recorder. If a file already exists at that location, this method overwrites it.
Call this method to start recording as quickly as possible upon calling [Record].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/prepareToRecord()
func (AVAudioRecorder) Record ¶
func (a AVAudioRecorder) Record() objc.ID
Starts or resumes audio recording.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/record()
func (AVAudioRecorder) RecordAtTime ¶
func (a AVAudioRecorder) RecordAtTime(time float64) bool
Records audio starting at a specific time.
time: The time at which to start recording, relative to [DeviceCurrentTime].
Return Value ¶
true if recording starts successfully; otherwise, false.
Discussion ¶
You can call this method on a single recorder, or use it to synchronize the recording of multiple recorders.
Calling this method implicitly calls [PrepareToRecord], which creates an audio file and prepares the system for recording.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/record(atTime:)
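Synchronizing several recorders amounts to picking one shared start time on the device clock. A sketch of the timing arithmetic (the call into RecordAtTime itself requires the framework and is shown only in a comment):

```go
package main

import "fmt"

// syncStart computes a shared start time for several recorders: take the
// device clock reading (DeviceCurrentTime) and add a small lead so every
// recorder can be scheduled before the deadline passes.
func syncStart(deviceNow, lead float64) float64 {
	return deviceNow + lead
}

func main() {
	start := syncStart(120.25, 0.5)
	fmt.Println(start) // 120.75
	// Each recorder would then call RecordAtTime(start), so all of them
	// begin capturing at the same device-clock instant.
}
```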
func (AVAudioRecorder) RecordAtTimeForDuration ¶
func (a AVAudioRecorder) RecordAtTimeForDuration(time float64, duration float64) bool
Records audio starting at a specific time for the indicated duration.
time: The time at which to start recording, relative to [DeviceCurrentTime].
duration: The duration of time to record, in seconds.
Return Value ¶
true if recording starts successfully; otherwise, false.
Discussion ¶
The recorder automatically stops recording when it reaches the indicated duration. You can also use this method to synchronize the recording of multiple recorders.
Calling this method implicitly calls [PrepareToRecord], which creates an audio file and prepares the system for recording.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/record(atTime:forDuration:)
func (AVAudioRecorder) RecordForDuration ¶
func (a AVAudioRecorder) RecordForDuration(duration float64) bool
Records audio for the indicated duration of time.
duration: The duration of time to record, in seconds.
Return Value ¶
true if successful; otherwise, false.
Discussion ¶
The recorder stops recording when it reaches the indicated duration.
Calling this method implicitly calls [PrepareToRecord], which creates an audio file and prepares the system for recording.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/record(forDuration:)
func (AVAudioRecorder) Recording ¶
func (a AVAudioRecorder) Recording() bool
A Boolean value that indicates whether the audio recorder is recording.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/isRecording
func (AVAudioRecorder) SetDelegate ¶
func (a AVAudioRecorder) SetDelegate(value AVAudioRecorderDelegate)
func (AVAudioRecorder) SetMeteringEnabled ¶
func (a AVAudioRecorder) SetMeteringEnabled(value bool)
func (AVAudioRecorder) Settings ¶
func (a AVAudioRecorder) Settings() foundation.INSDictionary
The settings that describe the format of the recorded audio.
Discussion ¶
See [InitWithURLSettingsError] for supported keys and values.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/settings
func (AVAudioRecorder) Stop ¶
func (a AVAudioRecorder) Stop()
Stops recording and closes the audio file.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/stop()
func (AVAudioRecorder) UpdateMeters ¶
func (a AVAudioRecorder) UpdateMeters()
Refreshes the average and peak power values for all channels of an audio recorder.
Discussion ¶
Call this method to update the level meter data before calling [AveragePowerForChannel] or [PeakPowerForChannel].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/updateMeters()
func (AVAudioRecorder) Url ¶
func (a AVAudioRecorder) Url() foundation.INSURL
The URL to which the recorder writes its data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder/url
type AVAudioRecorderClass ¶
type AVAudioRecorderClass struct {
// contains filtered or unexported fields
}
func GetAVAudioRecorderClass ¶
func GetAVAudioRecorderClass() AVAudioRecorderClass
GetAVAudioRecorderClass returns the class object for AVAudioRecorder.
func (AVAudioRecorderClass) Alloc ¶
func (ac AVAudioRecorderClass) Alloc() AVAudioRecorder
Alloc allocates memory for a new instance of the class.
func (AVAudioRecorderClass) Class ¶
func (ac AVAudioRecorderClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioRecorderDelegate ¶
type AVAudioRecorderDelegate interface {
objectivec.IObject
}
A protocol that defines the methods to respond to audio recording events and encoding errors.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorderDelegate
type AVAudioRecorderDelegateConfig ¶
type AVAudioRecorderDelegateConfig struct {
// Responding to Recording Completion
// AudioRecorderDidFinishRecordingSuccessfully — Tells the delegate when recording stops or finishes due to reaching its time limit.
AudioRecorderDidFinishRecordingSuccessfully func(recorder AVAudioRecorder, flag bool)
// Responding to Audio Encoding Errors
// AudioRecorderEncodeErrorDidOccurError — Tells the delegate that the audio recorder encountered an encoding error during recording.
AudioRecorderEncodeErrorDidOccurError func(recorder AVAudioRecorder, error_ foundation.NSError)
}
AVAudioRecorderDelegateConfig holds optional typed callbacks for AVAudioRecorderDelegate methods. Set non-nil fields to register the corresponding Objective-C delegate method. Methods with nil callbacks are not registered, so [NSObject.RespondsToSelector] returns false for them — matching the Objective-C delegate pattern exactly.
See Apple Documentation for protocol details.
type AVAudioRecorderDelegateObject ¶
type AVAudioRecorderDelegateObject struct {
objectivec.Object
}
AVAudioRecorderDelegateObject wraps an existing Objective-C object that conforms to the AVAudioRecorderDelegate protocol.
func AVAudioRecorderDelegateObjectFromID ¶
func AVAudioRecorderDelegateObjectFromID(id objc.ID) AVAudioRecorderDelegateObject
AVAudioRecorderDelegateObjectFromID constructs an AVAudioRecorderDelegateObject from an objc.ID. The object is determined to conform to the protocol at runtime.
func NewAVAudioRecorderDelegate ¶
func NewAVAudioRecorderDelegate(config AVAudioRecorderDelegateConfig) AVAudioRecorderDelegateObject
NewAVAudioRecorderDelegate creates an Objective-C object implementing the AVAudioRecorderDelegate protocol.
Each call registers a unique Objective-C class containing only the methods set in config. This means [NSObject.RespondsToSelector] works correctly for optional delegate methods — only non-nil callbacks are registered.
The returned AVAudioRecorderDelegateObject satisfies the AVAudioRecorderDelegate interface and can be passed directly to SetDelegate and similar methods.
See Apple Documentation for protocol details.
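A minimal sketch of this pattern, assuming the package is imported as avfaudio (the import path, the log usage, and the surrounding recorder setup are hypothetical):

```go
// Register only the completion callback. The encode-error field is left
// nil, so that selector is not registered and RespondsToSelector
// reports false for it.
delegate := avfaudio.NewAVAudioRecorderDelegate(avfaudio.AVAudioRecorderDelegateConfig{
	AudioRecorderDidFinishRecordingSuccessfully: func(r avfaudio.AVAudioRecorder, ok bool) {
		if ok {
			log.Println("recording finished")
		} else {
			log.Println("recording stopped unsuccessfully")
		}
	},
})
recorder.SetDelegate(delegate) // recorder: an AVAudioRecorder created elsewhere
```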
func (AVAudioRecorderDelegateObject) AudioRecorderDidFinishRecordingSuccessfully ¶
func (o AVAudioRecorderDelegateObject) AudioRecorderDidFinishRecordingSuccessfully(recorder IAVAudioRecorder, flag bool)
Tells the delegate when recording stops or finishes due to reaching its time limit.
recorder: The audio recorder that finished recording.
flag: A Boolean value that indicates whether the recording stopped successfully.
Discussion ¶
The system doesn’t call this method if the recorder stops due to an interruption.
func (AVAudioRecorderDelegateObject) AudioRecorderEncodeErrorDidOccurError ¶
func (o AVAudioRecorderDelegateObject) AudioRecorderEncodeErrorDidOccurError(recorder IAVAudioRecorder, error_ foundation.INSError)
Tells the delegate that the audio recorder encountered an encoding error during recording.
recorder: The audio recorder that encountered the encoding error.
error: An object that provides the details of the encoding error.
func (AVAudioRecorderDelegateObject) BaseObject ¶
func (o AVAudioRecorderDelegateObject) BaseObject() objectivec.Object
type AVAudioRoutingArbiter ¶
type AVAudioRoutingArbiter struct {
objectivec.Object
}
An object for configuring macOS apps to participate in AirPods Automatic Switching.
Overview ¶
AirPods Automatic Switching is a feature of Apple operating systems that intelligently connects wireless headphones to the most appropriate audio device in a multidevice environment. For example, if a user plays a movie on iPad, and then locks the device and starts playing music on iPhone, the system automatically switches the source audio device from iPad to iPhone.
iOS apps automatically participate in AirPods Automatic Switching. To enable your macOS app to participate in this behavior, use AVAudioRoutingArbiter to indicate when your app starts and finishes playing or recording audio. For example, a Voice over IP (VoIP) app might request arbitration before starting a call, and when the arbitration completes, begin the VoIP session. Likewise, when the call ends, the app would end the VoIP session and leave arbitration.
Participating in AirPods Automatic Switching ¶
- AVAudioRoutingArbiter.BeginArbitrationWithCategoryCompletionHandler: Begins routing arbitration to take ownership of a nearby Bluetooth audio route.
- AVAudioRoutingArbiter.LeaveArbitration: Stops an app’s participation in audio routing arbitration.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRoutingArbiter
func AVAudioRoutingArbiterFromID ¶
func AVAudioRoutingArbiterFromID(id objc.ID) AVAudioRoutingArbiter
AVAudioRoutingArbiterFromID constructs an AVAudioRoutingArbiter from an objc.ID.
An object for configuring macOS apps to participate in AirPods Automatic Switching.
func NewAVAudioRoutingArbiter ¶
func NewAVAudioRoutingArbiter() AVAudioRoutingArbiter
NewAVAudioRoutingArbiter creates a new AVAudioRoutingArbiter instance.
func (AVAudioRoutingArbiter) Autorelease ¶
func (a AVAudioRoutingArbiter) Autorelease() AVAudioRoutingArbiter
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioRoutingArbiter) BeginArbitrationWithCategory ¶
func (a AVAudioRoutingArbiter) BeginArbitrationWithCategory(ctx context.Context, category AVAudioRoutingArbitrationCategory) (bool, error)
BeginArbitrationWithCategory is a synchronous wrapper around AVAudioRoutingArbiter.BeginArbitrationWithCategoryCompletionHandler. It blocks until the completion handler fires or the context is cancelled.
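A sketch of the arbitration lifecycle for a VoIP app using the synchronous wrapper, assuming the package is imported as avfaudio (the import path is an assumption):

```go
// Take ownership of a nearby Bluetooth route before starting a call,
// with a timeout, then leave arbitration when the call ends.
arbiter := avfaudio.GetAVAudioRoutingArbiterClass().SharedRoutingArbiter()

ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

changed, err := arbiter.BeginArbitrationWithCategory(ctx,
	avfaudio.AVAudioRoutingArbitrationCategoryPlayAndRecordVoice)
if err != nil {
	log.Fatalf("arbitration failed: %v", err)
}
_ = changed // whether the default device switched; begin audio either way

// ... run the VoIP session ...

arbiter.LeaveArbitration() // let the system arbitrate for other devices
```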
func (AVAudioRoutingArbiter) BeginArbitrationWithCategoryCompletionHandler ¶
func (a AVAudioRoutingArbiter) BeginArbitrationWithCategoryCompletionHandler(category AVAudioRoutingArbitrationCategory, handler BoolErrorHandler)
Begins routing arbitration to take ownership of a nearby Bluetooth audio route.
category: A category that describes how the app uses audio.
handler: A completion handler the system calls asynchronously when the system completes audio routing arbitration. This closure takes the following parameters:
defaultDeviceChanged: A Boolean value that indicates whether the system switched the AirPods to the macOS device.
error: An error object that indicates why the request failed, or nil if the request succeeded.
Discussion ¶
Call this method to tell the operating system to arbitrate with nearby Apple devices to take ownership of a supported Bluetooth audio device. When arbitration completes, the system calls the completion handler, passing a Boolean that indicates whether the audio device changed. In either case, begin using audio as normal.
func (AVAudioRoutingArbiter) Init ¶
func (a AVAudioRoutingArbiter) Init() AVAudioRoutingArbiter
Init initializes the instance.
func (AVAudioRoutingArbiter) LeaveArbitration ¶
func (a AVAudioRoutingArbiter) LeaveArbitration()
Stops an app’s participation in audio routing arbitration.
Discussion ¶
Configure your app to notify the system when the app stops using audio for an undetermined duration. For example, for a Voice over IP (VoIP) app, call this method when the VoIP call ends. Calling this method allows the system to make an informed decision when multiple Apple devices are trying to take ownership of a Bluetooth audio route.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRoutingArbiter/leave()
type AVAudioRoutingArbiterClass ¶
type AVAudioRoutingArbiterClass struct {
// contains filtered or unexported fields
}
func GetAVAudioRoutingArbiterClass ¶
func GetAVAudioRoutingArbiterClass() AVAudioRoutingArbiterClass
GetAVAudioRoutingArbiterClass returns the class object for AVAudioRoutingArbiter.
func (AVAudioRoutingArbiterClass) Alloc ¶
func (ac AVAudioRoutingArbiterClass) Alloc() AVAudioRoutingArbiter
Alloc allocates memory for a new instance of the class.
func (AVAudioRoutingArbiterClass) Class ¶
func (ac AVAudioRoutingArbiterClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVAudioRoutingArbiterClass) SharedRoutingArbiter ¶
func (_AVAudioRoutingArbiterClass AVAudioRoutingArbiterClass) SharedRoutingArbiter() AVAudioRoutingArbiter
The shared routing arbiter object.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRoutingArbiter/shared
type AVAudioRoutingArbitrationCategory ¶
type AVAudioRoutingArbitrationCategory int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRoutingArbiter/Category
const (
	// AVAudioRoutingArbitrationCategoryPlayAndRecord: The app plays and records audio.
	AVAudioRoutingArbitrationCategoryPlayAndRecord AVAudioRoutingArbitrationCategory = 1
	// AVAudioRoutingArbitrationCategoryPlayAndRecordVoice: The app uses Voice over IP (VoIP).
	AVAudioRoutingArbitrationCategoryPlayAndRecordVoice AVAudioRoutingArbitrationCategory = 2
	// AVAudioRoutingArbitrationCategoryPlayback: The app plays audio.
	AVAudioRoutingArbitrationCategoryPlayback AVAudioRoutingArbitrationCategory = 0
)
func (AVAudioRoutingArbitrationCategory) String ¶
func (e AVAudioRoutingArbitrationCategory) String() string
type AVAudioSequencer ¶
type AVAudioSequencer struct {
objectivec.Object
}
An object that plays audio from a collection of MIDI events the system organizes into music tracks.
Creating an Audio Sequencer ¶
- AVAudioSequencer.InitWithAudioEngine: Creates an audio sequencer that the framework attaches to an audio engine instance.
Writing to a MIDI File ¶
- AVAudioSequencer.WriteToURLSMPTEResolutionReplaceExistingError: Creates and writes a MIDI file from the events in the sequence.
Handling Music Tracks ¶
- AVAudioSequencer.CreateAndAppendTrack: Creates a new music track and appends it to the sequencer’s list.
- AVAudioSequencer.ReverseEvents: Reverses the order of all events in all music tracks, including the tempo track.
- AVAudioSequencer.RemoveTrack: Removes the music track from the sequencer.
Managing Sequence Load Options ¶
- AVAudioSequencer.LoadFromDataOptionsError: Parses the data and adds its events to the sequence.
- AVAudioSequencer.LoadFromURLOptionsError: Loads the file the URL references and adds the events to the sequence.
Operating an Audio Sequencer ¶
- AVAudioSequencer.PrepareToPlay: Gets ready to play the sequence by prerolling all events.
- AVAudioSequencer.StartAndReturnError: Starts the sequencer’s player.
- AVAudioSequencer.Stop: Stops the sequencer’s player.
Managing Time Stamps ¶
- AVAudioSequencer.HostTimeForBeatsError: Gets the host time the sequence plays at the specified position.
- AVAudioSequencer.SecondsForBeats: Gets the time for the specified beat position (timestamp) in the track, in seconds.
Handling Beat Range ¶
- AVAudioSequencer.BeatsForHostTimeError: Gets the beat the system plays at the specified host time.
- AVAudioSequencer.BeatsForSeconds: Gets the beat position (timestamp) for the specified time in the track.
- AVAudioSequencer.AVMusicTimeStampEndOfTrack: A timestamp you use to access all events in a music track through a beat range.
- AVAudioSequencer.SetAVMusicTimeStampEndOfTrack
Setting the User Callback ¶
- AVAudioSequencer.SetUserCallback: Adds a callback that the sequencer calls each time it encounters a user event during playback.
Getting Sequence Properties ¶
- AVAudioSequencer.Playing: A Boolean value that indicates whether the sequencer’s player is in a playing state.
- AVAudioSequencer.Rate: The playback rate of the sequencer’s player.
- AVAudioSequencer.SetRate
- AVAudioSequencer.Tracks: An array that contains all the tracks in the sequence.
- AVAudioSequencer.CurrentPositionInBeats: The current playback position, in beats.
- AVAudioSequencer.SetCurrentPositionInBeats
- AVAudioSequencer.CurrentPositionInSeconds: The current playback position, in seconds.
- AVAudioSequencer.SetCurrentPositionInSeconds
- AVAudioSequencer.TempoTrack: The track that contains tempo information about the sequence.
- AVAudioSequencer.UserInfo: A dictionary that contains metadata from a sequence.
- AVAudioSequencer.DataWithSMPTEResolutionError: Gets a data object that contains the events from the sequence.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer
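The load/preroll/start/stop flow above can be sketched as follows, assuming the package is imported as avfaudio; NewAVAudioEngine and the foundation URL helper are assumptions not shown in this section:

```go
// Load a MIDI file into a sequencer attached to an engine, then play it.
engine := avfaudio.NewAVAudioEngine()            // assumed constructor
seq := avfaudio.NewAudioSequencerWithAudioEngine(engine)

url := foundation.NSURLFileURLWithPath("song.mid") // hypothetical helper
if _, err := seq.LoadFromURLOptionsError(url, 0); err != nil {
	log.Fatalf("load: %v", err)
}

seq.PrepareToPlay() // preroll so the first Start doesn't pay the cost
if _, err := seq.StartAndReturnError(); err != nil {
	log.Fatalf("start: %v", err)
}
// ...
seq.Stop() // stores the position; a later Start resumes from here
```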
func AVAudioSequencerFromID ¶
func AVAudioSequencerFromID(id objc.ID) AVAudioSequencer
AVAudioSequencerFromID constructs an AVAudioSequencer from an objc.ID.
An object that plays audio from a collection of MIDI events the system organizes into music tracks.
func NewAVAudioSequencer ¶
func NewAVAudioSequencer() AVAudioSequencer
NewAVAudioSequencer creates a new AVAudioSequencer instance.
func NewAudioSequencerWithAudioEngine ¶
func NewAudioSequencerWithAudioEngine(engine IAVAudioEngine) AVAudioSequencer
Creates an audio sequencer that the framework attaches to an audio engine instance.
engine: The engine to attach to.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/init(audioEngine:)
func (AVAudioSequencer) AVMusicTimeStampEndOfTrack ¶
func (a AVAudioSequencer) AVMusicTimeStampEndOfTrack() float64
A timestamp you use to access all events in a music track through a beat range.
See: https://developer.apple.com/documentation/avfaudio/avmusictimestampendoftrack
func (AVAudioSequencer) Autorelease ¶
func (a AVAudioSequencer) Autorelease() AVAudioSequencer
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioSequencer) BeatsForHostTimeError ¶
func (a AVAudioSequencer) BeatsForHostTimeError(inHostTime uint64) (AVMusicTimeStamp, error)
Gets the beat the system plays at the specified host time.
inHostTime: The host time for the beat position.
outError: On exit, if an error occurs, a description of the error.
Discussion ¶
This call is valid when the player is in a playing state. Otherwise, or if the player’s starting position is after the specified host time, it returns `0` with an error. This method uses the sequence’s tempo map to retrieve a beat time from the specified host time.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/beats(forHostTime:error:)
func (AVAudioSequencer) BeatsForSeconds ¶
func (a AVAudioSequencer) BeatsForSeconds(seconds float64) AVMusicTimeStamp
Gets the beat position (timestamp) for the specified time in the track.
seconds: The time to retrieve the beat timestamp for.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/beats(forSeconds:)
func (AVAudioSequencer) CreateAndAppendTrack ¶
func (a AVAudioSequencer) CreateAndAppendTrack() IAVMusicTrack
Creates a new music track and appends it to the sequencer’s list.
Return Value ¶
A new music track appended to the sequencer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/createAndAppendTrack()
func (AVAudioSequencer) CurrentPositionInBeats ¶
func (a AVAudioSequencer) CurrentPositionInBeats() float64
The current playback position, in beats.
Discussion ¶
Setting this property positions the sequencer’s player at the specified beat. You can update this property while the player is in a playing state, in which case playback resumes at the new position.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/currentPositionInBeats
func (AVAudioSequencer) CurrentPositionInSeconds ¶
func (a AVAudioSequencer) CurrentPositionInSeconds() float64
The current playback position, in seconds.
Discussion ¶
Setting this property positions the sequencer’s player at the specified time. You can update this property while the player is in a playing state, in which case playback resumes at the new position.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/currentPositionInSeconds
func (AVAudioSequencer) DataWithSMPTEResolutionError ¶
func (a AVAudioSequencer) DataWithSMPTEResolutionError(SMPTEResolution int) (foundation.INSData, error)
Gets a data object that contains the events from the sequence.
SMPTEResolution: The relationship between tick and quarter note for saving to a Standard MIDI File. Pass `0` to use the default.
outError: On exit, if an error occurs, a description of the error.
Discussion ¶
The client controls the lifetime of the data value this method returns.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/data(withSMPTEResolution:error:)
func (AVAudioSequencer) HostTimeForBeatsError ¶
func (a AVAudioSequencer) HostTimeForBeatsError(inBeats AVMusicTimeStamp) (uint64, error)
Gets the host time the sequence plays at the specified position.
inBeats: The timestamp for the beat position.
outError: On exit, if an error occurs, a description of the error.
Discussion ¶
This call is valid when the player is in a playing state. Otherwise, or if the player’s starting position is after the specified beat, it returns `0` with an error. The method uses the sequence’s tempo map to translate the specified beat into a host time, relative to the player’s starting time and beat.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/hostTime(forBeats:error:)
func (AVAudioSequencer) Init ¶
func (a AVAudioSequencer) Init() AVAudioSequencer
Init initializes the instance.
func (AVAudioSequencer) InitWithAudioEngine ¶
func (a AVAudioSequencer) InitWithAudioEngine(engine IAVAudioEngine) AVAudioSequencer
Creates an audio sequencer that the framework attaches to an audio engine instance.
engine: The engine to attach to.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/init(audioEngine:)
func (AVAudioSequencer) LoadFromDataOptionsError ¶
func (a AVAudioSequencer) LoadFromDataOptionsError(data foundation.INSData, options AVMusicSequenceLoadOptions) (bool, error)
Parses the data and adds its events to the sequence.
data: The data to load from.
options: Determines how the contents map to the tracks inside the sequence.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/load(from:options:)-8o58w
func (AVAudioSequencer) LoadFromURLOptionsError ¶
func (a AVAudioSequencer) LoadFromURLOptionsError(fileURL foundation.INSURL, options AVMusicSequenceLoadOptions) (bool, error)
Loads the file the URL references and adds the events to the sequence.
fileURL: The URL to the file.
options: Determines how the contents map to the tracks inside the sequence.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/load(from:options:)-9kb6m
func (AVAudioSequencer) Playing ¶
func (a AVAudioSequencer) Playing() bool
A Boolean value that indicates whether the sequencer’s player is in a playing state.
Discussion ¶
This value is true if the sequencer’s player is in a started state. The framework considers the player to be playing until it explicitly stops, including when it plays past the end of the events in a sequence.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/isPlaying
func (AVAudioSequencer) PrepareToPlay ¶
func (a AVAudioSequencer) PrepareToPlay()
Gets ready to play the sequence by prerolling all events.
Discussion ¶
If you don’t call this method, the framework invokes it automatically when playback starts, which can delay startup.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/prepareToPlay()
func (AVAudioSequencer) Rate ¶
func (a AVAudioSequencer) Rate() float32
The playback rate of the sequencer’s player.
Discussion ¶
The default playback rate is `1.0`. The value must be greater than `0.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/rate
func (AVAudioSequencer) RemoveTrack ¶
func (a AVAudioSequencer) RemoveTrack(track IAVMusicTrack) bool
Removes the music track from the sequencer.
track: The music track to remove.
Return Value ¶
A Boolean value that indicates whether the call succeeds.
Discussion ¶
This method doesn’t destroy the music track, so you can reuse it.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/removeTrack(_:)
func (AVAudioSequencer) ReverseEvents ¶
func (a AVAudioSequencer) ReverseEvents()
Reverses the order of all events in all music tracks, including the tempo track.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/reverseEvents()
func (AVAudioSequencer) SecondsForBeats ¶
func (a AVAudioSequencer) SecondsForBeats(beats AVMusicTimeStamp) float64
Gets the time for the specified beat position (timestamp) in the track, in seconds.
beats: The timestamp for the beat position.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/seconds(forBeats:)
func (AVAudioSequencer) SetAVMusicTimeStampEndOfTrack ¶
func (a AVAudioSequencer) SetAVMusicTimeStampEndOfTrack(value float64)
func (AVAudioSequencer) SetCurrentPositionInBeats ¶
func (a AVAudioSequencer) SetCurrentPositionInBeats(value float64)
func (AVAudioSequencer) SetCurrentPositionInSeconds ¶
func (a AVAudioSequencer) SetCurrentPositionInSeconds(value float64)
func (AVAudioSequencer) SetRate ¶
func (a AVAudioSequencer) SetRate(value float32)
func (AVAudioSequencer) SetUserCallback ¶
func (a AVAudioSequencer) SetUserCallback(userCallback AVAudioSequencerUserCallback)
Adds a callback that the sequencer calls each time it encounters a user event during playback.
userCallback: The user callback that the system calls.
Discussion ¶
The system calls the same callback for events that occur on any track in the sequencer. Set the callback to `nil` to disable it.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/setUserCallback(_:)
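A sketch of registering and later clearing a user callback, assuming `seq` is an AVAudioSequencer created elsewhere and that the package is imported as avfaudio (both assumptions); the callback signature follows AVAudioSequencerUserCallback:

```go
// Log each user event the sequencer encounters during playback.
seq.SetUserCallback(func(track avfaudio.AVMusicTrack, data foundation.NSData, timestamp float64) {
	log.Printf("user event at beat %.2f", timestamp)
})

// Disable the callback again by setting it to nil.
seq.SetUserCallback(nil)
```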
func (AVAudioSequencer) StartAndReturnError ¶
func (a AVAudioSequencer) StartAndReturnError() (bool, error)
Starts the sequencer’s player.
Discussion ¶
If you don’t call [PrepareToPlay], the framework calls it and then starts the player.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/start()
func (AVAudioSequencer) Stop ¶
func (a AVAudioSequencer) Stop()
Stops the sequencer’s player.
Discussion ¶
Stopping the player leaves it in an unprerolled state, but it stores the playback position so that a subsequent call to [StartAndReturnError] resumes where it stopped. Stopping doesn’t affect an audio engine you associate with the sequencer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/stop()
func (AVAudioSequencer) TempoTrack ¶
func (a AVAudioSequencer) TempoTrack() IAVMusicTrack
The track that contains tempo information about the sequence.
Discussion ¶
Each sequence has a single tempo track. The framework places all tempo events into this track along with other appropriate events, such as the time signature from a MIDI file.
You can edit the tempo track like any other track. The framework ignores nontempo events in the track.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/tempoTrack
func (AVAudioSequencer) Tracks ¶
func (a AVAudioSequencer) Tracks() []AVMusicTrack
An array that contains all the tracks in the sequence.
Discussion ¶
The track indices start at `0`, and don’t include the tempo track.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/tracks
func (AVAudioSequencer) UserInfo ¶
func (a AVAudioSequencer) UserInfo() foundation.INSDictionary
A dictionary that contains metadata from a sequence.
Discussion ¶
This property contains one or more of the values from AVAudioSequencerInfoDictionaryKey.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/userInfo
func (AVAudioSequencer) WriteToURLSMPTEResolutionReplaceExistingError ¶
func (a AVAudioSequencer) WriteToURLSMPTEResolutionReplaceExistingError(fileURL foundation.INSURL, resolution int, replace bool) (bool, error)
Creates and writes a MIDI file from the events in the sequence.
fileURL: The URL of the file you want to write to.
resolution: The relationship between tick and quarter note for saving to a Standard MIDI File. Passing zero uses the default value set using the tempo track.
replace: When `true`, the framework overwrites an existing file at `fileURL`. Otherwise, the call fails with a permission error if a file at the specified path exists.
Discussion ¶
The framework writes only MIDI events when writing to the MIDI file. MIDI files are normally beat-based, but can also have an SMPTE (or real-time, rather than beat time) representation. The relationship between tick and quarter note for saving to a Standard MIDI File is the current value for the tempo track.
type AVAudioSequencerClass ¶
type AVAudioSequencerClass struct {
// contains filtered or unexported fields
}
func GetAVAudioSequencerClass ¶
func GetAVAudioSequencerClass() AVAudioSequencerClass
GetAVAudioSequencerClass returns the class object for AVAudioSequencer.
func (AVAudioSequencerClass) Alloc ¶
func (ac AVAudioSequencerClass) Alloc() AVAudioSequencer
Alloc allocates memory for a new instance of the class.
func (AVAudioSequencerClass) Class ¶
func (ac AVAudioSequencerClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioSequencerInfoDictionaryKey ¶
type AVAudioSequencerInfoDictionaryKey = string
AVAudioSequencerInfoDictionaryKey defines string constants for the metadata keys of a sequencer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer/InfoDictionaryKey
type AVAudioSequencerUserCallback ¶
type AVAudioSequencerUserCallback = func(AVMusicTrack, foundation.NSData, float64)
AVAudioSequencerUserCallback is a callback the sequencer calls asynchronously during playback when it encounters a user event.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencerUserCallback
type AVAudioSessionActivationOptions ¶
type AVAudioSessionActivationOptions uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSessionActivationOptions
const (
	// AVAudioSessionActivationOptionNone: A value that indicates the system should activate the audio session with no options.
	AVAudioSessionActivationOptionNone AVAudioSessionActivationOptions = 0
)
func (AVAudioSessionActivationOptions) String ¶
func (e AVAudioSessionActivationOptions) String() string
type AVAudioSessionAnchoringStrategy ¶
type AVAudioSessionAnchoringStrategy int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSessionAnchoringStrategy
const (
	AVAudioSessionAnchoringStrategyAutomatic AVAudioSessionAnchoringStrategy = 0
	AVAudioSessionAnchoringStrategyFront     AVAudioSessionAnchoringStrategy = 0
	AVAudioSessionAnchoringStrategyScene     AVAudioSessionAnchoringStrategy = 0
)
func (AVAudioSessionAnchoringStrategy) String ¶
func (e AVAudioSessionAnchoringStrategy) String() string
type AVAudioSessionCapability ¶
type AVAudioSessionCapability struct {
objectivec.Object
}
Describes whether a specific capability is supported and whether that capability is currently enabled.
Inspecting a capability ¶
- AVAudioSessionCapability.Enabled: A Boolean value that indicates whether the capability is enabled.
- AVAudioSessionCapability.Supported: A Boolean value that indicates whether the capability is supported.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSessionCapability
func AVAudioSessionCapabilityFromID ¶
func AVAudioSessionCapabilityFromID(id objc.ID) AVAudioSessionCapability
AVAudioSessionCapabilityFromID constructs a AVAudioSessionCapability from an objc.ID.
Describes whether a specific capability is supported and whether that capability is currently enabled.
func NewAVAudioSessionCapability ¶
func NewAVAudioSessionCapability() AVAudioSessionCapability
NewAVAudioSessionCapability creates a new AVAudioSessionCapability instance.
func (AVAudioSessionCapability) Autorelease ¶
func (a AVAudioSessionCapability) Autorelease() AVAudioSessionCapability
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioSessionCapability) BluetoothMicrophoneExtension ¶
func (a AVAudioSessionCapability) BluetoothMicrophoneExtension() objc.ID
An optional port extension that describes capabilities relevant to Bluetooth microphone ports.
func (AVAudioSessionCapability) Enabled ¶
func (a AVAudioSessionCapability) Enabled() bool
A Boolean value that indicates whether the capability is enabled.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSessionCapability/isEnabled
func (AVAudioSessionCapability) Init ¶
func (a AVAudioSessionCapability) Init() AVAudioSessionCapability
Init initializes the instance.
func (AVAudioSessionCapability) SetBluetoothMicrophoneExtension ¶
func (a AVAudioSessionCapability) SetBluetoothMicrophoneExtension(value objc.ID)
func (AVAudioSessionCapability) Supported ¶
func (a AVAudioSessionCapability) Supported() bool
A Boolean value that indicates whether the capability is supported.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSessionCapability/isSupported
type AVAudioSessionCapabilityClass ¶
type AVAudioSessionCapabilityClass struct {
// contains filtered or unexported fields
}
func GetAVAudioSessionCapabilityClass ¶
func GetAVAudioSessionCapabilityClass() AVAudioSessionCapabilityClass
GetAVAudioSessionCapabilityClass returns the class object for AVAudioSessionCapability.
func (AVAudioSessionCapabilityClass) Alloc ¶
func (ac AVAudioSessionCapabilityClass) Alloc() AVAudioSessionCapability
Alloc allocates memory for a new instance of the class.
func (AVAudioSessionCapabilityClass) Class ¶
func (ac AVAudioSessionCapabilityClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioSessionCategoryOptions ¶
type AVAudioSessionCategoryOptions uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/CategoryOptions-swift.struct
const (
	// AVAudioSessionCategoryOptionAllowAirPlay: An option that determines whether you can stream audio from this session to AirPlay devices.
	AVAudioSessionCategoryOptionAllowAirPlay AVAudioSessionCategoryOptions = 64
	// AVAudioSessionCategoryOptionAllowBluetoothA2DP: An option that determines whether you can stream audio from this session to Bluetooth devices that support the Advanced Audio Distribution Profile (A2DP).
	AVAudioSessionCategoryOptionAllowBluetoothA2DP AVAudioSessionCategoryOptions = 32
	// AVAudioSessionCategoryOptionAllowBluetoothHFP: An option that makes Bluetooth Hands-Free Profile (HFP) devices available for audio input.
	AVAudioSessionCategoryOptionAllowBluetoothHFP AVAudioSessionCategoryOptions = 4
	// AVAudioSessionCategoryOptionBluetoothHighQualityRecording: An option that indicates to enable high-quality audio for input and output routes.
	AVAudioSessionCategoryOptionBluetoothHighQualityRecording AVAudioSessionCategoryOptions = 524288
	// AVAudioSessionCategoryOptionDefaultToSpeaker: An option that determines whether audio from the session defaults to the built-in speaker instead of the receiver.
	AVAudioSessionCategoryOptionDefaultToSpeaker AVAudioSessionCategoryOptions = 8
	// AVAudioSessionCategoryOptionDuckOthers: An option that reduces the volume of other audio sessions while audio from this session plays.
	AVAudioSessionCategoryOptionDuckOthers AVAudioSessionCategoryOptions = 2
	// AVAudioSessionCategoryOptionFarFieldInput: Use this option if a session prefers far-field input when available.
	AVAudioSessionCategoryOptionFarFieldInput AVAudioSessionCategoryOptions = 262144
	// AVAudioSessionCategoryOptionInterruptSpokenAudioAndMixWithOthers: An option that determines whether to pause spoken audio content from other sessions when your app plays its audio.
	AVAudioSessionCategoryOptionInterruptSpokenAudioAndMixWithOthers AVAudioSessionCategoryOptions = 17
	// AVAudioSessionCategoryOptionMixWithOthers: An option that indicates whether audio from this session mixes with audio from active sessions in other audio apps.
	AVAudioSessionCategoryOptionMixWithOthers AVAudioSessionCategoryOptions = 1
	// AVAudioSessionCategoryOptionOverrideMutedMicrophoneInterruption: An option that indicates whether the system interrupts the audio session when it mutes the built-in microphone.
	AVAudioSessionCategoryOptionOverrideMutedMicrophoneInterruption AVAudioSessionCategoryOptions = 128
	// Deprecated.
	AVAudioSessionCategoryOptionAllowBluetooth AVAudioSessionCategoryOptions = 4
)
func (AVAudioSessionCategoryOptions) String ¶
func (e AVAudioSessionCategoryOptions) String() string
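The category options are bit flags, so they combine with bitwise OR. A small illustrative sketch (the lowercase constants below are local stand-ins mirroring the values listed above, not part of this package):

```go
package main

import "fmt"

// Local stand-ins for a few AVAudioSessionCategoryOptions values.
const (
	optMixWithOthers                        uint = 1
	optDuckOthers                           uint = 2
	optInterruptSpokenAudioAndMixWithOthers uint = 17 // 17 = 16 | 1, so it implies MixWithOthers
)

func main() {
	// Combine options with OR; test membership with AND.
	opts := optDuckOthers | optMixWithOthers
	fmt.Println(opts)                                                          // 3
	fmt.Println(optInterruptSpokenAudioAndMixWithOthers&optMixWithOthers != 0) // true
}
```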
type AVAudioSessionErrorCode ¶
type AVAudioSessionErrorCode int
const (
	AVAudioSessionErrorCodeBadParam AVAudioSessionErrorCode = -50
	AVAudioSessionErrorCodeCannotInterruptOthers AVAudioSessionErrorCode = '!'<<24 | 'i'<<16 | 'n'<<8 | 't' // '!int'
	AVAudioSessionErrorCodeCannotStartPlaying AVAudioSessionErrorCode = '!'<<24 | 'p'<<16 | 'l'<<8 | 'a' // '!pla'
	AVAudioSessionErrorCodeCannotStartRecording AVAudioSessionErrorCode = '!'<<24 | 'r'<<16 | 'e'<<8 | 'c' // '!rec'
	AVAudioSessionErrorCodeExpiredSession AVAudioSessionErrorCode = '!'<<24 | 's'<<16 | 'e'<<8 | 's' // '!ses'
	AVAudioSessionErrorCodeIncompatibleCategory AVAudioSessionErrorCode = '!'<<24 | 'c'<<16 | 'a'<<8 | 't' // '!cat'
	AVAudioSessionErrorCodeInsufficientPriority AVAudioSessionErrorCode = '!'<<24 | 'p'<<16 | 'r'<<8 | 'i' // '!pri'
	AVAudioSessionErrorCodeIsBusy AVAudioSessionErrorCode = '!'<<24 | 'a'<<16 | 'c'<<8 | 't' // '!act'
	AVAudioSessionErrorCodeMediaServicesFailed AVAudioSessionErrorCode = 'm'<<24 | 's'<<16 | 'r'<<8 | 'v' // 'msrv'
	AVAudioSessionErrorCodeMissingEntitlement AVAudioSessionErrorCode = 'e'<<24 | 'n'<<16 | 't'<<8 | '?' // 'ent?'
	AVAudioSessionErrorCodeNone AVAudioSessionErrorCode = 0
	AVAudioSessionErrorCodeResourceNotAvailable AVAudioSessionErrorCode = '!'<<24 | 'r'<<16 | 'e'<<8 | 's' // '!res'
	AVAudioSessionErrorCodeSessionNotActive AVAudioSessionErrorCode = 'i'<<24 | 'n'<<16 | 'a'<<8 | 'c' // 'inac'
	AVAudioSessionErrorCodeSiriIsRecording AVAudioSessionErrorCode = 's'<<24 | 'i'<<16 | 'r'<<8 | 'i' // 'siri'
	AVAudioSessionErrorCodeUnspecified AVAudioSessionErrorCode = 'w'<<24 | 'h'<<16 | 'a'<<8 | 't' // 'what'
)
func (AVAudioSessionErrorCode) String ¶
func (e AVAudioSessionErrorCode) String() string
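Most of these codes pack four ASCII characters into an integer (a FourCC). A small helper can recover the readable tag from a code value; this is an illustrative sketch, not part of the package:

```go
package main

import "fmt"

// fourCC recovers the four-character ASCII tag packed into an
// AVAudioSession error code, e.g. "!int" for
// AVAudioSessionErrorCodeCannotInterruptOthers.
func fourCC(code int) string {
	return string([]byte{
		byte(code >> 24), byte(code >> 16), byte(code >> 8), byte(code),
	})
}

func main() {
	fmt.Println(fourCC('!'<<24 | 'i'<<16 | 'n'<<8 | 't')) // prints "!int"
	fmt.Println(fourCC('m'<<24 | 's'<<16 | 'r'<<8 | 'v')) // prints "msrv"
}
```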
type AVAudioSessionIOType ¶
type AVAudioSessionIOType uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/IOType
const (
	// AVAudioSessionIOTypeAggregated: An I/O type that indicates if audio input and output should be presented in the same realtime I/O callback.
	AVAudioSessionIOTypeAggregated AVAudioSessionIOType = 1
	// AVAudioSessionIOTypeNotSpecified: The default audio session I/O type.
	AVAudioSessionIOTypeNotSpecified AVAudioSessionIOType = 0
)
func (AVAudioSessionIOType) String ¶
func (e AVAudioSessionIOType) String() string
type AVAudioSessionInterruptionOptions ¶
type AVAudioSessionInterruptionOptions uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/InterruptionOptions
const ( // AVAudioSessionInterruptionOptionShouldResume: An option that indicates the interruption by another audio session has ended and the app can resume its audio session. AVAudioSessionInterruptionOptionShouldResume AVAudioSessionInterruptionOptions = 1 )
func (AVAudioSessionInterruptionOptions) String ¶
func (e AVAudioSessionInterruptionOptions) String() string
type AVAudioSessionInterruptionReason ¶
type AVAudioSessionInterruptionReason uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/InterruptionReason
const ( // AVAudioSessionInterruptionReasonBuiltInMicMuted: The system interrupts the audio session when the device mutes the built-in microphone. AVAudioSessionInterruptionReasonBuiltInMicMuted AVAudioSessionInterruptionReason = 2 // AVAudioSessionInterruptionReasonDefault: The system interrupts this audio session when it activates another. AVAudioSessionInterruptionReasonDefault AVAudioSessionInterruptionReason = 0 AVAudioSessionInterruptionReasonDeviceUnauthenticated AVAudioSessionInterruptionReason = 0 // AVAudioSessionInterruptionReasonRouteDisconnected: The system interrupts the audio session due to a disconnection of an audio route. AVAudioSessionInterruptionReasonRouteDisconnected AVAudioSessionInterruptionReason = 4 // AVAudioSessionInterruptionReasonSceneWasBackgrounded: The system backgrounds the scene and interrupts the audio session. AVAudioSessionInterruptionReasonSceneWasBackgrounded AVAudioSessionInterruptionReason = 0 // Deprecated. AVAudioSessionInterruptionReasonAppWasSuspended AVAudioSessionInterruptionReason = 1 )
func (AVAudioSessionInterruptionReason) String ¶
func (e AVAudioSessionInterruptionReason) String() string
type AVAudioSessionInterruptionType ¶
type AVAudioSessionInterruptionType uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/InterruptionType
const ( // AVAudioSessionInterruptionTypeBegan: A type that indicates that the operating system began interrupting the audio session. AVAudioSessionInterruptionTypeBegan AVAudioSessionInterruptionType = 1 // AVAudioSessionInterruptionTypeEnded: A type that indicates that the operating system ended interrupting the audio session. AVAudioSessionInterruptionTypeEnded AVAudioSessionInterruptionType = 0 )
func (AVAudioSessionInterruptionType) String ¶
func (e AVAudioSessionInterruptionType) String() string
type AVAudioSessionMicrophoneInjectionMode ¶
type AVAudioSessionMicrophoneInjectionMode int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/MicrophoneInjectionMode
const ( // AVAudioSessionMicrophoneInjectionModeNone: A mode that indicates not to use spoken audio injection. AVAudioSessionMicrophoneInjectionModeNone AVAudioSessionMicrophoneInjectionMode = 0 // AVAudioSessionMicrophoneInjectionModeSpokenAudio: A mode that indicates to inject spoken audio, like synthesized speech, along with microphone audio. AVAudioSessionMicrophoneInjectionModeSpokenAudio AVAudioSessionMicrophoneInjectionMode = 1 )
func (AVAudioSessionMicrophoneInjectionMode) String ¶
func (e AVAudioSessionMicrophoneInjectionMode) String() string
type AVAudioSessionPortOverride ¶
type AVAudioSessionPortOverride uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/PortOverride
const ( // AVAudioSessionPortOverrideNone: A value that indicates not to override the output audio port. AVAudioSessionPortOverrideNone AVAudioSessionPortOverride = 0 // AVAudioSessionPortOverrideSpeaker: A value that indicates to override the current inputs and outputs, and route audio to the built-in speaker and microphone. AVAudioSessionPortOverrideSpeaker AVAudioSessionPortOverride = 's'<<24 | 'p'<<16 | 'k'<<8 | 'r' // 'spkr' )
func (AVAudioSessionPortOverride) String ¶
func (e AVAudioSessionPortOverride) String() string
type AVAudioSessionPromptStyle ¶
type AVAudioSessionPromptStyle uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/PromptStyle-swift.enum
const ( // AVAudioSessionPromptStyleNone: Your app shouldn’t issue prompts at this time. AVAudioSessionPromptStyleNone AVAudioSessionPromptStyle = 'n'<<24 | 'o'<<16 | 'n'<<8 | 'e' // 'none' // AVAudioSessionPromptStyleNormal: Your app may use long, verbal prompts. AVAudioSessionPromptStyleNormal AVAudioSessionPromptStyle = 'n'<<24 | 'r'<<16 | 'm'<<8 | 'l' // 'nrml' // AVAudioSessionPromptStyleShort: Your app should issue short, nonverbal prompts. AVAudioSessionPromptStyleShort AVAudioSessionPromptStyle = 's'<<24 | 'h'<<16 | 'r'<<8 | 't' // 'shrt' )
func (AVAudioSessionPromptStyle) String ¶
func (e AVAudioSessionPromptStyle) String() string
type AVAudioSessionRecordPermission ¶
type AVAudioSessionRecordPermission uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/RecordPermission-swift.enum
const ( // Deprecated. AVAudioSessionRecordPermissionDenied AVAudioSessionRecordPermission = 'd'<<24 | 'e'<<16 | 'n'<<8 | 'y' // 'deny' // Deprecated. AVAudioSessionRecordPermissionGranted AVAudioSessionRecordPermission = 'g'<<24 | 'r'<<16 | 'n'<<8 | 't' // 'grnt' // Deprecated. AVAudioSessionRecordPermissionUndetermined AVAudioSessionRecordPermission = 'u'<<24 | 'n'<<16 | 'd'<<8 | 't' // 'undt' )
func (AVAudioSessionRecordPermission) String ¶
func (e AVAudioSessionRecordPermission) String() string
type AVAudioSessionRenderingMode ¶
type AVAudioSessionRenderingMode int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/RenderingMode-swift.enum
const ( // AVAudioSessionRenderingModeDolbyAtmos: A mode that represents Dolby Atmos. AVAudioSessionRenderingModeDolbyAtmos AVAudioSessionRenderingMode = 5 // AVAudioSessionRenderingModeDolbyAudio: A mode that represents Dolby audio. AVAudioSessionRenderingModeDolbyAudio AVAudioSessionRenderingMode = 4 // AVAudioSessionRenderingModeMonoStereo: A mode that represents non multi-channel audio. AVAudioSessionRenderingModeMonoStereo AVAudioSessionRenderingMode = 1 // AVAudioSessionRenderingModeNotApplicable: A mode that represents there’s no asset in a loading or playing state. AVAudioSessionRenderingModeNotApplicable AVAudioSessionRenderingMode = 0 // AVAudioSessionRenderingModeSpatialAudio: A mode that represents a fallback for when hardware capabilities don’t support Dolby. AVAudioSessionRenderingModeSpatialAudio AVAudioSessionRenderingMode = 3 // AVAudioSessionRenderingModeSurround: A mode that represents general multi-channel audio. AVAudioSessionRenderingModeSurround AVAudioSessionRenderingMode = 2 )
func (AVAudioSessionRenderingMode) String ¶
func (e AVAudioSessionRenderingMode) String() string
type AVAudioSessionRouteChangeReason ¶
type AVAudioSessionRouteChangeReason uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/RouteChangeReason
const ( // AVAudioSessionRouteChangeReasonCategoryChange: A value that indicates that the category of the session object changed. AVAudioSessionRouteChangeReasonCategoryChange AVAudioSessionRouteChangeReason = 3 // AVAudioSessionRouteChangeReasonNewDeviceAvailable: A value that indicates a user action, such as plugging in a headset, has made a preferred audio route available. AVAudioSessionRouteChangeReasonNewDeviceAvailable AVAudioSessionRouteChangeReason = 1 // AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory: A value that indicates that the route changed because no suitable route is now available for the specified category. AVAudioSessionRouteChangeReasonNoSuitableRouteForCategory AVAudioSessionRouteChangeReason = 7 AVAudioSessionRouteChangeReasonOldDeviceUnavailable AVAudioSessionRouteChangeReason = 2 // AVAudioSessionRouteChangeReasonOverride: A value that indicates that the output route was overridden by the app. AVAudioSessionRouteChangeReasonOverride AVAudioSessionRouteChangeReason = 4 // AVAudioSessionRouteChangeReasonRouteConfigurationChange: A value that indicates that the configuration for a set of I/O ports has changed. AVAudioSessionRouteChangeReasonRouteConfigurationChange AVAudioSessionRouteChangeReason = 8 // AVAudioSessionRouteChangeReasonUnknown: A value that indicates the reason for the change is unknown. AVAudioSessionRouteChangeReasonUnknown AVAudioSessionRouteChangeReason = 0 // AVAudioSessionRouteChangeReasonWakeFromSleep: A value that indicates that the route changed when the device woke up from sleep. AVAudioSessionRouteChangeReasonWakeFromSleep AVAudioSessionRouteChangeReason = 6 )
func (AVAudioSessionRouteChangeReason) String ¶
func (e AVAudioSessionRouteChangeReason) String() string
type AVAudioSessionRouteSelection ¶
type AVAudioSessionRouteSelection int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/RouteSelection
type AVAudioSessionRouteSharingPolicy ¶
type AVAudioSessionRouteSharingPolicy uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/RouteSharingPolicy-swift.enum
const ( // AVAudioSessionRouteSharingPolicyDefault: A policy that follows standard rules for routing audio output. AVAudioSessionRouteSharingPolicyDefault AVAudioSessionRouteSharingPolicy = 0 // AVAudioSessionRouteSharingPolicyIndependent: A policy in which the route picker UI directs videos to a wireless route. AVAudioSessionRouteSharingPolicyIndependent AVAudioSessionRouteSharingPolicy = 2 // AVAudioSessionRouteSharingPolicyLongFormAudio: A policy that routes output to the shared long-form audio output. AVAudioSessionRouteSharingPolicyLongFormAudio AVAudioSessionRouteSharingPolicy = 1 // AVAudioSessionRouteSharingPolicyLongFormVideo: A policy that routes output to the shared long-form video output. AVAudioSessionRouteSharingPolicyLongFormVideo AVAudioSessionRouteSharingPolicy = 3 // Deprecated. AVAudioSessionRouteSharingPolicyLongForm AVAudioSessionRouteSharingPolicy = 1 )
func (AVAudioSessionRouteSharingPolicy) String ¶
func (e AVAudioSessionRouteSharingPolicy) String() string
type AVAudioSessionSetActiveOptions ¶
type AVAudioSessionSetActiveOptions uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/SetActiveOptions
const ( // AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation: An option that indicates that the system should notify other apps that you’ve deactivated your app’s audio session. AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation AVAudioSessionSetActiveOptions = 1 )
func (AVAudioSessionSetActiveOptions) String ¶
func (e AVAudioSessionSetActiveOptions) String() string
type AVAudioSessionSilenceSecondaryAudioHintType ¶
type AVAudioSessionSilenceSecondaryAudioHintType uint
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/SilenceSecondaryAudioHintType
const ( // AVAudioSessionSilenceSecondaryAudioHintTypeBegin: A value that indicates that another application’s primary audio has started. AVAudioSessionSilenceSecondaryAudioHintTypeBegin AVAudioSessionSilenceSecondaryAudioHintType = 1 // AVAudioSessionSilenceSecondaryAudioHintTypeEnd: A value that indicates that another application’s primary audio has stopped. AVAudioSessionSilenceSecondaryAudioHintTypeEnd AVAudioSessionSilenceSecondaryAudioHintType = 0 )
func (AVAudioSessionSilenceSecondaryAudioHintType) String ¶
func (e AVAudioSessionSilenceSecondaryAudioHintType) String() string
type AVAudioSessionSoundStageSize ¶
type AVAudioSessionSoundStageSize int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/SoundStageSize
const ( // AVAudioSessionSoundStageSizeAutomatic: The system sets the sound stage size. AVAudioSessionSoundStageSizeAutomatic AVAudioSessionSoundStageSize = 0 // AVAudioSessionSoundStageSizeLarge: A large sound stage. AVAudioSessionSoundStageSizeLarge AVAudioSessionSoundStageSize = 0 // AVAudioSessionSoundStageSizeMedium: A medium sound stage. AVAudioSessionSoundStageSizeMedium AVAudioSessionSoundStageSize = 0 // AVAudioSessionSoundStageSizeSmall: A small sound stage. AVAudioSessionSoundStageSizeSmall AVAudioSessionSoundStageSize = 0 )
func (AVAudioSessionSoundStageSize) String ¶
func (e AVAudioSessionSoundStageSize) String() string
type AVAudioSessionSpatialExperience ¶
type AVAudioSessionSpatialExperience int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSessionSpatialExperience-c.enum
const ( AVAudioSessionSpatialExperienceBypassed AVAudioSessionSpatialExperience = 0 AVAudioSessionSpatialExperienceFixed AVAudioSessionSpatialExperience = 0 AVAudioSessionSpatialExperienceHeadTracked AVAudioSessionSpatialExperience = 0 )
func (AVAudioSessionSpatialExperience) String ¶
func (e AVAudioSessionSpatialExperience) String() string
type AVAudioSinkNode ¶
type AVAudioSinkNode struct {
AVAudioNode
}
An object that receives audio data.
Overview ¶
You use an AVAudioSinkNode to receive audio data through AVAudioSinkNodeReceiverBlock. You only use it in the input chain.
An audio sink node doesn’t support format conversion. When connecting, use the input node’s output format as the connection format. The format should match the hardware input sample rate.
The voice processing I/O unit is an exception to the above because it supports sample rate conversion. The input scope format (hardware format) and output scope format (client format) of the input node can differ in that case.
An audio sink node doesn’t support manual rendering mode, and doesn’t have an output bus, so you can’t install a tap on it.
Creating an Audio Sink Node ¶
- AVAudioSinkNode.InitWithReceiverBlock: Creates an audio sink node with a block that receives audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSinkNode
func AVAudioSinkNodeFromID ¶
func AVAudioSinkNodeFromID(id objc.ID) AVAudioSinkNode
AVAudioSinkNodeFromID constructs an AVAudioSinkNode from an objc.ID.
An object that receives audio data.
func NewAVAudioSinkNode ¶
func NewAVAudioSinkNode() AVAudioSinkNode
NewAVAudioSinkNode creates a new AVAudioSinkNode instance.
func NewAudioSinkNodeWithReceiverBlock ¶
func NewAudioSinkNodeWithReceiverBlock(block AVAudioSinkNodeReceiverBlock) AVAudioSinkNode
Creates an audio sink node with a block that receives audio data.
block: The block that receives audio data from the input.
Discussion ¶
When connecting the audio sink node to another node, the system uses the connection format to set the audio format for the input bus.
The system calls the block on the real-time thread when receiving input data. Avoid making blocking calls within the block.
When receiving data, the system sets the audio format using the node’s input format.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSinkNode/init(receiverBlock:)
func (AVAudioSinkNode) Autorelease ¶
func (a AVAudioSinkNode) Autorelease() AVAudioSinkNode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioSinkNode) Init ¶
func (a AVAudioSinkNode) Init() AVAudioSinkNode
Init initializes the instance.
func (AVAudioSinkNode) InitWithReceiverBlock ¶
func (a AVAudioSinkNode) InitWithReceiverBlock(block AVAudioSinkNodeReceiverBlock) AVAudioSinkNode
Creates an audio sink node with a block that receives audio data.
block: The block that receives audio data from the input.
Discussion ¶
When connecting the audio sink node to another node, the system uses the connection format to set the audio format for the input bus.
The system calls the block on the real-time thread when receiving input data. Avoid making blocking calls within the block.
When receiving data, the system sets the audio format using the node’s input format.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSinkNode/init(receiverBlock:)
type AVAudioSinkNodeClass ¶
type AVAudioSinkNodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioSinkNodeClass ¶
func GetAVAudioSinkNodeClass() AVAudioSinkNodeClass
GetAVAudioSinkNodeClass returns the class object for AVAudioSinkNode.
func (AVAudioSinkNodeClass) Alloc ¶
func (ac AVAudioSinkNodeClass) Alloc() AVAudioSinkNode
Alloc allocates memory for a new instance of the class.
func (AVAudioSinkNodeClass) Class ¶
func (ac AVAudioSinkNodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioSinkNodeReceiverBlock ¶
type AVAudioSinkNodeReceiverBlock = func(objectivec.IObject, uint32, objectivec.IObject) int
AVAudioSinkNodeReceiverBlock is a block that receives audio data from an audio sink node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSinkNodeReceiverBlock
type AVAudioSourceNode ¶
type AVAudioSourceNode struct {
AVAudioNode
}
An object that supplies audio data.
Overview ¶
The AVAudioSourceNode class allows for supplying audio data for rendering through AVAudioSourceNodeRenderBlock. It’s a convenient method for delivering audio data instead of setting the input callback on an audio unit with `kAudioUnitProperty_SetRenderCallback`.
Creating an Audio Source Node ¶
- AVAudioSourceNode.InitWithRenderBlock: Creates an audio source node with a block that supplies audio data.
- AVAudioSourceNode.InitWithFormatRenderBlock: Creates an audio source node with the audio format and a block that supplies audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSourceNode
func AVAudioSourceNodeFromID ¶
func AVAudioSourceNodeFromID(id objc.ID) AVAudioSourceNode
AVAudioSourceNodeFromID constructs an AVAudioSourceNode from an objc.ID.
An object that supplies audio data.
func NewAVAudioSourceNode ¶
func NewAVAudioSourceNode() AVAudioSourceNode
NewAVAudioSourceNode creates a new AVAudioSourceNode instance.
func NewAudioSourceNodeWithFormatRenderBlock ¶
func NewAudioSourceNodeWithFormatRenderBlock(format IAVAudioFormat, block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
Creates an audio source node with the audio format and a block that supplies audio data.
format: The format of the pulse-code modulated (PCM) audio data the block supplies.
block: The block to supply audio data to the output.
Discussion ¶
When connecting the audio source node to another node, the system uses the connection format to set the audio format for the output bus.
Depending on the audio engine’s operating mode, the system calls the block on a real-time or non-real-time thread. When rendering to a device, avoid making blocking calls within the block.
AVAudioSourceNode supports different audio formats for the block and the output, but the system only supports linear PCM conversions of sample rate, bit depth, and interleaving.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSourceNode/init(format:renderBlock:)
func NewAudioSourceNodeWithRenderBlock ¶
func NewAudioSourceNodeWithRenderBlock(block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
Creates an audio source node with a block that supplies audio data.
block: The block to supply audio data to the output.
Discussion ¶
When connecting the audio source node to another node, the system uses the connection format to set the audio format for the output bus.
Depending on the audio engine’s operating mode, the system calls the block on a real-time or non-real-time thread. When rendering to a device, avoid making blocking calls within the block.
The system sets the node’s output format using the audio format for the render block. When reconnecting the node with a different output format, the audio format for the block changes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSourceNode/init(renderBlock:)
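The real render block receives Objective-C buffer objects (see AVAudioSourceNodeRenderBlock below) and must be real-time safe. As a hedged, framework-free sketch of the kind of work such a block typically performs — filling a sample buffer and returning a zero, noErr-style status — `fillSine` is a hypothetical helper, not part of the bindings:

```go
package main

import (
	"fmt"
	"math"
)

// fillSine writes frameCount samples of a sine tone into buf,
// advancing *phase so successive calls produce a continuous
// waveform. It returns 0 on success and a nonzero status on
// error, in the style of an OSStatus result. No allocation or
// locking occurs here, mirroring the real-time-safety rule.
func fillSine(buf []float32, frameCount int, freq, sampleRate float64, phase *float64) int {
	if len(buf) < frameCount {
		return -1 // buffer too small: report an error status
	}
	inc := 2 * math.Pi * freq / sampleRate
	for i := 0; i < frameCount; i++ {
		buf[i] = float32(math.Sin(*phase))
		*phase += inc
	}
	return 0
}

func main() {
	buf := make([]float32, 256)
	var phase float64
	status := fillSine(buf, 256, 440, 44100, &phase)
	fmt.Println(status) // prints 0; buf[0] is sin(0) == 0
}
```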
func (AVAudioSourceNode) Autorelease ¶
func (a AVAudioSourceNode) Autorelease() AVAudioSourceNode
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioSourceNode) DestinationForMixerBus ¶
func (a AVAudioSourceNode) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t connected to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
func (AVAudioSourceNode) Init ¶
func (a AVAudioSourceNode) Init() AVAudioSourceNode
Init initializes the instance.
func (AVAudioSourceNode) InitWithFormatRenderBlock ¶
func (a AVAudioSourceNode) InitWithFormatRenderBlock(format IAVAudioFormat, block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
Creates an audio source node with the audio format and a block that supplies audio data.
format: The format of the pulse-code modulated (PCM) audio data the block supplies.
block: The block to supply audio data to the output.
Discussion ¶
When connecting the audio source node to another node, the system uses the connection format to set the audio format for the output bus.
Depending on the audio engine’s operating mode, the system calls the block on a real-time or non-real-time thread. When rendering to a device, avoid making blocking calls within the block.
AVAudioSourceNode supports different audio formats for the block and the output, but the system only supports linear PCM conversions of sample rate, bit depth, and interleaving.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSourceNode/init(format:renderBlock:)
func (AVAudioSourceNode) InitWithRenderBlock ¶
func (a AVAudioSourceNode) InitWithRenderBlock(block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
Creates an audio source node with a block that supplies audio data.
block: The block to supply audio data to the output.
Discussion ¶
When connecting the audio source node to another node, the system uses the connection format to set the audio format for the output bus.
Depending on the audio engine’s operating mode, the system calls the block on a real-time or non-real-time thread. When rendering to a device, avoid making blocking calls within the block.
The system sets the node’s output format using the audio format for the render block. When reconnecting the node with a different output format, the audio format for the block changes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSourceNode/init(renderBlock:)
func (AVAudioSourceNode) Obstruction ¶
func (a AVAudioSourceNode) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
Discussion ¶
The value of `obstruction` is in decibels. The system blocks only the direct path of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudioSourceNode) Occlusion ¶
func (a AVAudioSourceNode) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
Discussion ¶
The value of `occlusion` is in decibels. The system blocks the direct and reverb paths of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioSourceNode) Pan ¶
func (a AVAudioSourceNode) Pan() float32
The bus’s stereo pan.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-1.0` to `1.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioSourceNode) PointSourceInHeadMode ¶
func (a AVAudioSourceNode) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioSourceNode) Position ¶
func (a AVAudioSourceNode) Position() AVAudio3DPoint
The location of the source in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioSourceNode) Rate ¶
func (a AVAudioSourceNode) Rate() float32
A value that changes the playback rate of the input signal.
Discussion ¶
A value of `2.0` results in the output audio playing one octave higher. A value of `0.5` results in the output audio playing one octave lower.
The default value is `1.0`, and the range of valid values is `0.5` to `2.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
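The rate-to-pitch relationship described above is logarithmic: doubling the rate raises pitch by one octave, halving it lowers pitch by one. A small sketch (`octavesForRate` is a hypothetical helper, not part of the bindings):

```go
package main

import (
	"fmt"
	"math"
)

// octavesForRate converts a playback-rate multiplier into the
// pitch shift it produces, in octaves: rate 2.0 -> +1 octave,
// rate 0.5 -> -1 octave, rate 1.0 -> no shift.
func octavesForRate(rate float64) float64 {
	return math.Log2(rate)
}

func main() {
	fmt.Println(octavesForRate(2.0)) // prints 1
	fmt.Println(octavesForRate(0.5)) // prints -1
	fmt.Println(octavesForRate(1.0)) // prints 0
}
```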
func (AVAudioSourceNode) RenderingAlgorithm ¶
func (a AVAudioSourceNode) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
Discussion ¶
Depending on the current output format of the AVAudioEnvironmentNode instance, the system may only support a subset of the rendering algorithms. You can retrieve an array of valid rendering algorithms by calling the [ApplicableRenderingAlgorithms] function of the AVAudioEnvironmentNode instance.
The default rendering algorithm is [Audio3DMixingRenderingAlgorithmEqualPowerPanning]. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioSourceNode) ReverbBlend ¶
func (a AVAudioSourceNode) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
Discussion ¶
This property controls the amount of the source’s audio that the AVAudioEnvironmentNode instance processes. A value of `0.5` results in an equal blend of dry and processed (wet) audio.
The default is `0.0`, and the range of valid values is `0.0` (completely dry) to `1.0` (completely wet). Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
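The blend described above can be sketched as a linear crossfade between the dry signal and its reverb-processed (wet) counterpart; this is a simplification, and the framework’s internal mixing may differ:

```go
package main

import "fmt"

// mixReverb crossfades a dry sample with its reverb-processed
// (wet) counterpart: blend 0.0 is completely dry, 1.0 completely
// wet, and 0.5 an equal mix, matching the property's documented
// range and default.
func mixReverb(dry, wet, blend float32) float32 {
	return dry*(1-blend) + wet*blend
}

func main() {
	fmt.Println(mixReverb(1.0, 0.0, 0.0)) // prints 1 (default: completely dry)
	fmt.Println(mixReverb(1.0, 0.0, 1.0)) // prints 0 (completely wet)
	fmt.Println(mixReverb(1.0, 0.0, 0.5)) // prints 0.5 (equal blend)
}
```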
func (AVAudioSourceNode) SetObstruction ¶
func (a AVAudioSourceNode) SetObstruction(value float32)
func (AVAudioSourceNode) SetOcclusion ¶
func (a AVAudioSourceNode) SetOcclusion(value float32)
func (AVAudioSourceNode) SetPan ¶
func (a AVAudioSourceNode) SetPan(value float32)
func (AVAudioSourceNode) SetPointSourceInHeadMode ¶
func (a AVAudioSourceNode) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioSourceNode) SetPosition ¶
func (a AVAudioSourceNode) SetPosition(value AVAudio3DPoint)
func (AVAudioSourceNode) SetRate ¶
func (a AVAudioSourceNode) SetRate(value float32)
func (AVAudioSourceNode) SetRenderingAlgorithm ¶
func (a AVAudioSourceNode) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioSourceNode) SetReverbBlend ¶
func (a AVAudioSourceNode) SetReverbBlend(value float32)
func (AVAudioSourceNode) SetSourceMode ¶
func (a AVAudioSourceNode) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioSourceNode) SetVolume ¶
func (a AVAudioSourceNode) SetVolume(value float32)
func (AVAudioSourceNode) SourceMode ¶
func (a AVAudioSourceNode) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioSourceNode) Volume ¶
func (a AVAudioSourceNode) Volume() float32
The bus’s input volume.
Discussion ¶
The default value is `1.0`, and the range of valid values is `0.0` to `1.0`. Only the AVAudioEnvironmentNode and the AVAudioMixerNode implement this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioSourceNodeClass ¶
type AVAudioSourceNodeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioSourceNodeClass ¶
func GetAVAudioSourceNodeClass() AVAudioSourceNodeClass
GetAVAudioSourceNodeClass returns the class object for AVAudioSourceNode.
func (AVAudioSourceNodeClass) Alloc ¶
func (ac AVAudioSourceNodeClass) Alloc() AVAudioSourceNode
Alloc allocates memory for a new instance of the class.
func (AVAudioSourceNodeClass) Class ¶
func (ac AVAudioSourceNodeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioSourceNodeRenderBlock ¶
type AVAudioSourceNodeRenderBlock = func(*bool, objectivec.IObject, uint32, objectivec.IObject) int
AVAudioSourceNodeRenderBlock is a block that supplies audio data to an audio source node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSourceNodeRenderBlock
type AVAudioStereoMixing ¶
type AVAudioStereoMixing interface {
objectivec.IObject
// The bus’s stereo pan.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
Pan() float32
// The bus’s stereo pan.
//
// See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
SetPan(value float32)
}
A protocol that defines stereo mixing properties a mixer uses.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing
type AVAudioStereoMixingObject ¶
type AVAudioStereoMixingObject struct {
objectivec.Object
}
AVAudioStereoMixingObject wraps an existing Objective-C object that conforms to the AVAudioStereoMixing protocol.
func AVAudioStereoMixingObjectFromID ¶
func AVAudioStereoMixingObjectFromID(id objc.ID) AVAudioStereoMixingObject
AVAudioStereoMixingObjectFromID constructs an AVAudioStereoMixingObject from an objc.ID. The object is determined to conform to the protocol at runtime.
func (AVAudioStereoMixingObject) BaseObject ¶
func (o AVAudioStereoMixingObject) BaseObject() objectivec.Object
func (AVAudioStereoMixingObject) Pan ¶
func (o AVAudioStereoMixingObject) Pan() float32
The bus’s stereo pan.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioStereoMixingObject) SetPan ¶
func (o AVAudioStereoMixingObject) SetPan(value float32)
type AVAudioStereoOrientation ¶
type AVAudioStereoOrientation int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSession/StereoOrientation
const ( // AVAudioStereoOrientationLandscapeLeft: Audio capture should be horizontally oriented, with the USB-C or Lightning connector on the left. AVAudioStereoOrientationLandscapeLeft AVAudioStereoOrientation = 4 // AVAudioStereoOrientationLandscapeRight: Audio capture should be horizontally oriented, with the USB-C or Lightning connector on the right. AVAudioStereoOrientationLandscapeRight AVAudioStereoOrientation = 3 // AVAudioStereoOrientationNone: The audio session isn’t configured for stereo recording. AVAudioStereoOrientationNone AVAudioStereoOrientation = 0 // AVAudioStereoOrientationPortrait: Audio capture should be vertically oriented, with the USB-C or Lightning connector on the bottom. AVAudioStereoOrientationPortrait AVAudioStereoOrientation = 1 // AVAudioStereoOrientationPortraitUpsideDown: Audio capture should be vertically oriented, with the USB-C or Lightning connector on the top. AVAudioStereoOrientationPortraitUpsideDown AVAudioStereoOrientation = 2 )
func (AVAudioStereoOrientation) String ¶
func (e AVAudioStereoOrientation) String() string
type AVAudioTime ¶
type AVAudioTime struct {
objectivec.Object
}
An object you use to represent a moment in time.
Overview ¶
The AVAudioTime object represents a single moment in time in two ways:
- As host time, using the system’s basic clock with `mach_absolute_time()`
- As audio samples at a particular sample rate
A single AVAudioTime instance contains either or both representations, meaning it might represent only a sample time, a host time, or both.
Instances of this class are immutable.
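The either-or-both behavior can be illustrated with a small stand-alone model. This is a toy sketch, not the binding itself: the `audioTime` struct and its constructors below are hypothetical stand-ins that mirror how the NewAudioTimeWith… constructors leave one or both validity flags set.

```go
package main

import "fmt"

// audioTime is a toy model of how an AVAudioTime value can carry a
// host-time representation, a sample-time representation, or both,
// each guarded by its own validity flag.
type audioTime struct {
	hostTime        uint64
	hostTimeValid   bool
	sampleTime      int64
	sampleRate      float64
	sampleTimeValid bool
}

// withHostTime mirrors NewAudioTimeWithHostTime: only host time is valid.
func withHostTime(h uint64) audioTime {
	return audioTime{hostTime: h, hostTimeValid: true}
}

// withSampleTime mirrors NewAudioTimeWithSampleTimeAtRate: only sample time is valid.
func withSampleTime(s int64, rate float64) audioTime {
	return audioTime{sampleTime: s, sampleRate: rate, sampleTimeValid: true}
}

// withBoth mirrors NewAudioTimeWithHostTimeSampleTimeAtRate: both are valid.
func withBoth(h uint64, s int64, rate float64) audioTime {
	return audioTime{hostTime: h, hostTimeValid: true, sampleTime: s, sampleRate: rate, sampleTimeValid: true}
}

func main() {
	a := withHostTime(123456)
	b := withSampleTime(44100, 44100)
	c := withBoth(123456, 44100, 44100)
	fmt.Println(a.hostTimeValid, a.sampleTimeValid) // true false
	fmt.Println(b.hostTimeValid, b.sampleTimeValid) // false true
	fmt.Println(c.hostTimeValid, c.sampleTimeValid) // true true
}
```

A value with only one valid representation can later be completed against an anchor via ExtrapolateTimeFromAnchor.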
Creating an Audio Time Instance ¶
- AVAudioTime.InitWithAudioTimeStampSampleRate: Creates an audio time object with the specified timestamp and sample rate.
- AVAudioTime.InitWithHostTime: Creates an audio time object with the specified host time.
- AVAudioTime.InitWithHostTimeSampleTimeAtRate: Creates an audio time object with the specified host time, sample time, and sample rate.
- AVAudioTime.InitWithSampleTimeAtRate: Creates an audio time object with the specified timestamp and sample rate.
- AVAudioTime.ExtrapolateTimeFromAnchor: Creates an audio time object by converting between host time and sample time.
Manipulating Host Time ¶
- AVAudioTime.HostTime: The host time.
- AVAudioTime.HostTimeValid: A Boolean value that indicates whether the host time value is valid.
Getting Sample Rate Information ¶
- AVAudioTime.SampleRate: The sampling rate that the sample time property expresses.
- AVAudioTime.SampleTime: The time as a number of audio samples that the current audio device tracks.
- AVAudioTime.SampleTimeValid: A Boolean value that indicates whether the sample time and sample rate properties are in a valid state.
Getting the Core Audio Time Stamp ¶
- AVAudioTime.AudioTimeStamp: The time as an audio timestamp.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime
func AVAudioTimeFromID ¶
func AVAudioTimeFromID(id objc.ID) AVAudioTime
AVAudioTimeFromID constructs an AVAudioTime from an objc.ID.
An object you use to represent a moment in time.
func NewAVAudioTime ¶
func NewAVAudioTime() AVAudioTime
NewAVAudioTime creates a new AVAudioTime instance.
func NewAudioTimeWithAudioTimeStampSampleRate ¶
func NewAudioTimeWithAudioTimeStampSampleRate(ts objectivec.IObject, sampleRate float64) AVAudioTime
Creates an audio time object with the specified timestamp and sample rate.
ts: The timestamp.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/init(audioTimeStamp:sampleRate:)
func NewAudioTimeWithHostTime ¶
func NewAudioTimeWithHostTime(hostTime uint64) AVAudioTime
Creates an audio time object with the specified host time.
hostTime: The host time.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/init(hostTime:)
func NewAudioTimeWithHostTimeSampleTimeAtRate ¶
func NewAudioTimeWithHostTimeSampleTimeAtRate(hostTime uint64, sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
Creates an audio time object with the specified host time, sample time, and sample rate.
hostTime: The host time.
sampleTime: The sample time.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/init(hostTime:sampleTime:atRate:)
func NewAudioTimeWithSampleTimeAtRate ¶
func NewAudioTimeWithSampleTimeAtRate(sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
Creates an audio time object with the specified timestamp and sample rate.
sampleTime: The sample time.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/init(sampleTime:atRate:)
func (AVAudioTime) AudioTimeStamp ¶
func (a AVAudioTime) AudioTimeStamp() objectivec.IObject
The time as an audio timestamp.
Discussion ¶
This is useful for compatibility with lower-level Core Audio and Audio Toolbox API.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/audioTimeStamp
func (AVAudioTime) Autorelease ¶
func (a AVAudioTime) Autorelease() AVAudioTime
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioTime) ExtrapolateTimeFromAnchor ¶
func (a AVAudioTime) ExtrapolateTimeFromAnchor(anchorTime IAVAudioTime) IAVAudioTime
Creates an audio time object by converting between host time and sample time.
anchorTime: An audio time instance with a more complete timestamp than that of the receiver (self).
Return Value ¶
A new AVAudioTime instance.
Discussion ¶
If `anchorTime` is an AVAudioTime instance where both host time and sample time are valid, and the receiver is another timestamp where only one of the two is valid, this method returns a new AVAudioTime that copies the receiver’s valid field and uses the anchor to fill in the other.
The `anchorTime` value must have a valid host time and sample time, and the receiver must have a sample rate and at least one valid host time or sample time; otherwise, this method returns `nil`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/extrapolateTime(fromAnchor:)
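The arithmetic behind this extrapolation can be sketched in plain Go. This is an illustrative model, not the binding: it assumes a host clock that ticks once per nanosecond, whereas real mach host time requires the timebase scaling that HostTimeForSeconds and SecondsForHostTime perform.

```go
package main

import "fmt"

// extrapolateHostTime fills in the missing host time of a value that only
// has a valid sample time, given an anchor where both representations are
// valid. It assumes one host tick per nanosecond for simplicity.
func extrapolateHostTime(anchorHost uint64, anchorSample int64, sample int64, rate float64) uint64 {
	deltaSamples := sample - anchorSample
	deltaNanos := float64(deltaSamples) / rate * 1e9 // elapsed time in ns
	return uint64(int64(anchorHost) + int64(deltaNanos))
}

func main() {
	// Anchor: host tick 1_000_000_000 corresponds to sample 0 at 48 kHz.
	// 48_000 samples later (one second) the host time is 1e9 ticks later.
	h := extrapolateHostTime(1_000_000_000, 0, 48_000, 48_000)
	fmt.Println(h) // 2000000000
}
```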
func (AVAudioTime) HostTime ¶
func (a AVAudioTime) HostTime() uint64
The host time.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/hostTime
func (AVAudioTime) HostTimeValid ¶
func (a AVAudioTime) HostTimeValid() bool
A Boolean value that indicates whether the host time value is valid.
Discussion ¶
This property returns true if the [HostTime] property is valid; otherwise, it returns false.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/isHostTimeValid
func (AVAudioTime) InitWithAudioTimeStampSampleRate ¶
func (a AVAudioTime) InitWithAudioTimeStampSampleRate(ts objectivec.IObject, sampleRate float64) AVAudioTime
Creates an audio time object with the specified timestamp and sample rate.
ts: The timestamp.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/init(audioTimeStamp:sampleRate:)
func (AVAudioTime) InitWithHostTime ¶
func (a AVAudioTime) InitWithHostTime(hostTime uint64) AVAudioTime
Creates an audio time object with the specified host time.
hostTime: The host time.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/init(hostTime:)
func (AVAudioTime) InitWithHostTimeSampleTimeAtRate ¶
func (a AVAudioTime) InitWithHostTimeSampleTimeAtRate(hostTime uint64, sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
Creates an audio time object with the specified host time, sample time, and sample rate.
hostTime: The host time.
sampleTime: The sample time.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/init(hostTime:sampleTime:atRate:)
func (AVAudioTime) InitWithSampleTimeAtRate ¶
func (a AVAudioTime) InitWithSampleTimeAtRate(sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
Creates an audio time object with the specified timestamp and sample rate.
sampleTime: The sample time.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/init(sampleTime:atRate:)
func (AVAudioTime) SampleRate ¶
func (a AVAudioTime) SampleRate() float64
The sampling rate that the sample time property expresses.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/sampleRate
func (AVAudioTime) SampleTime ¶
func (a AVAudioTime) SampleTime() AVAudioFramePosition
The time as a number of audio samples that the current audio device tracks.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/sampleTime
func (AVAudioTime) SampleTimeValid ¶
func (a AVAudioTime) SampleTimeValid() bool
A Boolean value that indicates whether the sample time and sample rate properties are in a valid state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/isSampleTimeValid
type AVAudioTimeClass ¶
type AVAudioTimeClass struct {
// contains filtered or unexported fields
}
func GetAVAudioTimeClass ¶
func GetAVAudioTimeClass() AVAudioTimeClass
GetAVAudioTimeClass returns the class object for AVAudioTime.
func (AVAudioTimeClass) Alloc ¶
func (ac AVAudioTimeClass) Alloc() AVAudioTime
Alloc allocates memory for a new instance of the class.
func (AVAudioTimeClass) Class ¶
func (ac AVAudioTimeClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVAudioTimeClass) HostTimeForSeconds ¶
func (_AVAudioTimeClass AVAudioTimeClass) HostTimeForSeconds(seconds float64) uint64
Converts seconds to host time.
seconds: The number of seconds.
Return Value ¶
The host time that represents the seconds you specify.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/hostTime(forSeconds:)
func (AVAudioTimeClass) SecondsForHostTime ¶
func (_AVAudioTimeClass AVAudioTimeClass) SecondsForHostTime(hostTime uint64) float64
Converts host time to seconds.
hostTime: The host time.
Return Value ¶
The number of seconds that represent the host time you specify.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/seconds(forHostTime:)
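The two class methods above are inverses built on the mach timebase, which expresses one host tick as numer/denom nanoseconds. The sketch below shows the conversion with a hypothetical hard-coded timebase of 125/3 (a value commonly reported on Apple silicon; Intel Macs report 1/1); real code would query `mach_timebase_info` rather than assume it.

```go
package main

import "fmt"

// Hypothetical mach timebase: one host tick = numer/denom nanoseconds.
const (
	numer = 125
	denom = 3
)

// secondsForHostTime converts host ticks to seconds.
func secondsForHostTime(ticks uint64) float64 {
	return float64(ticks) * numer / denom / 1e9
}

// hostTimeForSeconds converts seconds to host ticks.
func hostTimeForSeconds(sec float64) uint64 {
	return uint64(sec * 1e9 * denom / numer)
}

func main() {
	t := hostTimeForSeconds(1.0)
	fmt.Println(t)                     // 24000000
	fmt.Println(secondsForHostTime(t)) // 1
}
```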
func (AVAudioTimeClass) TimeWithAudioTimeStampSampleRate ¶
func (_AVAudioTimeClass AVAudioTimeClass) TimeWithAudioTimeStampSampleRate(ts objectivec.IObject, sampleRate float64) AVAudioTime
Creates an audio time object with the specified timestamp and sample rate.
ts: The timestamp.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/timeWithAudioTimeStamp:sampleRate:
func (AVAudioTimeClass) TimeWithHostTime ¶
func (_AVAudioTimeClass AVAudioTimeClass) TimeWithHostTime(hostTime uint64) AVAudioTime
Creates an audio time object with the specified host time.
hostTime: The host time.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/timeWithHostTime:
func (AVAudioTimeClass) TimeWithHostTimeSampleTimeAtRate ¶
func (_AVAudioTimeClass AVAudioTimeClass) TimeWithHostTimeSampleTimeAtRate(hostTime uint64, sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
Creates an audio time object with the specified host time, sample time, and sample rate.
hostTime: The host time.
sampleTime: The sample time.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/timeWithHostTime:sampleTime:atRate:
func (AVAudioTimeClass) TimeWithSampleTimeAtRate ¶
func (_AVAudioTimeClass AVAudioTimeClass) TimeWithSampleTimeAtRate(sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
Creates an audio time object with the specified sample time and sample rate.
sampleTime: The sample time.
sampleRate: The sample rate.
Return Value ¶
A new AVAudioTime instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime/timeWithSampleTime:atRate:
type AVAudioUnit ¶
type AVAudioUnit struct {
AVAudioNode
}
A subclass of the audio node class that processes audio either in real time or non-real time, depending on the type of the audio unit.
Getting the Core Audio audio unit ¶
- AVAudioUnit.AudioUnit: The underlying Core Audio audio unit.
Loading an audio preset file ¶
- AVAudioUnit.LoadAudioUnitPresetAtURLError: Loads an audio unit using a specified preset.
Getting audio unit values ¶
- AVAudioUnit.AudioComponentDescription: The audio component description that represents the underlying Core Audio audio unit.
- AVAudioUnit.ManufacturerName: The name of the manufacturer of the audio unit.
- AVAudioUnit.Name: The name of the audio unit.
- AVAudioUnit.Version: The version number of the audio unit.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit
func AVAudioUnitFromID ¶
func AVAudioUnitFromID(id objc.ID) AVAudioUnit
AVAudioUnitFromID constructs an AVAudioUnit from an objc.ID.
A subclass of the audio node class that processes audio either in real time or non-real time, depending on the type of the audio unit.
func NewAVAudioUnit ¶
func NewAVAudioUnit() AVAudioUnit
NewAVAudioUnit creates a new AVAudioUnit instance.
func (AVAudioUnit) AudioComponentDescription ¶
func (a AVAudioUnit) AudioComponentDescription() objectivec.IObject
The audio component description that represents the underlying Core Audio audio unit.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit/audioComponentDescription
func (AVAudioUnit) AudioUnit ¶
func (a AVAudioUnit) AudioUnit() IAVAudioUnit
The underlying Core Audio audio unit.
Discussion ¶
This property is a reference to the underlying audio unit. AVAudioUnit exposes it so that you can use the AudioUnit C API to modify aspects its subclasses don’t expose, such as initialization state, stream formats, channel layouts, or connections to other audio units.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit/audioUnit
func (AVAudioUnit) Autorelease ¶
func (a AVAudioUnit) Autorelease() AVAudioUnit
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnit) LoadAudioUnitPresetAtURLError ¶
func (a AVAudioUnit) LoadAudioUnitPresetAtURLError(url foundation.INSURL) (bool, error)
Loads an audio unit using a specified preset.
url: The URL of an audio unit preset file.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit/loadPreset(at:)
func (AVAudioUnit) ManufacturerName ¶
func (a AVAudioUnit) ManufacturerName() string
The name of the manufacturer of the audio unit.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit/manufacturerName
func (AVAudioUnit) Name ¶
func (a AVAudioUnit) Name() string
The name of the audio unit.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit/name
func (AVAudioUnit) Version ¶
func (a AVAudioUnit) Version() uint
The version number of the audio unit.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit/version
type AVAudioUnitClass ¶
type AVAudioUnitClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitClass ¶
func GetAVAudioUnitClass() AVAudioUnitClass
GetAVAudioUnitClass returns the class object for AVAudioUnit.
func (AVAudioUnitClass) Alloc ¶
func (ac AVAudioUnitClass) Alloc() AVAudioUnit
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitClass) Class ¶
func (ac AVAudioUnitClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVAudioUnitClass) InstantiateWithComponentDescriptionOptions ¶
func (ac AVAudioUnitClass) InstantiateWithComponentDescriptionOptions(ctx context.Context, audioComponentDescription objectivec.IObject, options objectivec.IObject) (*AVAudioUnit, error)
InstantiateWithComponentDescriptionOptions is a synchronous wrapper around [AVAudioUnitClass.InstantiateWithComponentDescriptionOptionsCompletionHandler]. It blocks until the completion handler fires or the context is cancelled.
func (AVAudioUnitClass) InstantiateWithComponentDescriptionOptionsCompletionHandler ¶
func (_AVAudioUnitClass AVAudioUnitClass) InstantiateWithComponentDescriptionOptionsCompletionHandler(audioComponentDescription objectivec.IObject, options objectivec.IObject, completionHandler AVAudioUnitErrorHandler)
Creates an instance of an audio unit component asynchronously and wraps it in an audio unit class.
audioComponentDescription: The component to create.
options: The options the method uses to create the component.
completionHandler: A handler the framework calls in an arbitrary thread context when creation completes. Retain the AVAudioUnit this handler provides.
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
options is a [audiotoolbox.AudioComponentInstantiationOptions].
Discussion ¶
You must create components with flags that include requiresAsyncInstantiation asynchronously through this method if they’re for use with AVAudioEngine.
The AVAudioUnit instance is usually a subclass that the method selects according to the component’s type, for example, AVAudioUnitEffect, AVAudioUnitGenerator, AVAudioUnitMIDIInstrument, or AVAudioUnitTimeEffect.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit/instantiate(with:options:completionHandler:)
type AVAudioUnitComponent ¶
type AVAudioUnitComponent struct {
objectivec.Object
}
An object that provides details about an audio unit.
Overview ¶
Details can include information such as type, subtype, manufacturer, and location. An AVAudioUnitComponent can include user tags, which you can query later for display.
Getting the audio unit component’s audio unit ¶
- AVAudioUnitComponent.AudioComponent: The underlying audio component.
Getting audio unit component information ¶
- AVAudioUnitComponent.AudioComponentDescription: The audio component description.
- AVAudioUnitComponent.AvailableArchitectures: An array of architectures that the audio unit supports.
- AVAudioUnitComponent.ConfigurationDictionary: The audio unit component’s configuration dictionary.
- AVAudioUnitComponent.HasCustomView: A Boolean value that indicates whether the audio unit component has a custom view.
- AVAudioUnitComponent.HasMIDIInput: A Boolean value that indicates whether the audio unit component has MIDI input.
- AVAudioUnitComponent.HasMIDIOutput: A Boolean value that indicates whether the audio unit component has MIDI output.
- AVAudioUnitComponent.ManufacturerName: The name of the manufacturer of the audio unit component.
- AVAudioUnitComponent.Name: The name of the audio unit component.
- AVAudioUnitComponent.PassesAUVal: A Boolean value that indicates whether the audio unit component passes the validation tests.
- AVAudioUnitComponent.SandboxSafe: A Boolean value that indicates whether the audio unit component is safe for sandboxing.
- AVAudioUnitComponent.SupportsNumberInputChannelsOutputChannels: Gets a Boolean value that indicates whether the audio unit component supports the specified number of input and output channels.
- AVAudioUnitComponent.TypeName: The audio unit component type.
- AVAudioUnitComponent.Version: The audio unit component version number.
- AVAudioUnitComponent.VersionString: A string that represents the audio unit component version number.
Getting audio unit component tags ¶
- AVAudioUnitComponent.IconURL: The URL of an icon that represents the audio unit component.
- AVAudioUnitComponent.Icon: An icon that represents the component.
- AVAudioUnitComponent.LocalizedTypeName: The localized type name of the component.
- AVAudioUnitComponent.AllTagNames: An array of tag names for the audio unit component.
- AVAudioUnitComponent.UserTagNames: An array of tags the user creates.
- AVAudioUnitComponent.SetUserTagNames
Audio unit manufacturer names ¶
- AVAudioUnitComponent.AVAudioUnitManufacturerNameApple: The audio unit manufacturer is Apple.
Audio unit types ¶
- AVAudioUnitComponent.AVAudioUnitTypeOutput: An audio unit type that represents an output.
- AVAudioUnitComponent.AVAudioUnitTypeMusicDevice: An audio unit type that represents a music device.
- AVAudioUnitComponent.AVAudioUnitTypeMusicEffect: An audio unit type that represents a music effect.
- AVAudioUnitComponent.AVAudioUnitTypeFormatConverter: An audio unit type that represents a format converter.
- AVAudioUnitComponent.AVAudioUnitTypeEffect: An audio unit type that represents an effect.
- AVAudioUnitComponent.AVAudioUnitTypeMixer: An audio unit type that represents a mixer.
- AVAudioUnitComponent.AVAudioUnitTypePanner: An audio unit type that represents a panner.
- AVAudioUnitComponent.AVAudioUnitTypeGenerator: An audio unit type that represents a generator.
- AVAudioUnitComponent.AVAudioUnitTypeOfflineEffect: An audio unit type that represents an offline effect.
- AVAudioUnitComponent.AVAudioUnitTypeMIDIProcessor: An audio unit type that represents a MIDI processor.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent
func AVAudioUnitComponentFromID ¶
func AVAudioUnitComponentFromID(id objc.ID) AVAudioUnitComponent
AVAudioUnitComponentFromID constructs an AVAudioUnitComponent from an objc.ID.
An object that provides details about an audio unit.
func NewAVAudioUnitComponent ¶
func NewAVAudioUnitComponent() AVAudioUnitComponent
NewAVAudioUnitComponent creates a new AVAudioUnitComponent instance.
func (AVAudioUnitComponent) AVAudioUnitManufacturerNameApple ¶
func (a AVAudioUnitComponent) AVAudioUnitManufacturerNameApple() string
The audio unit manufacturer is Apple.
See: https://developer.apple.com/documentation/avfaudio/avaudiounitmanufacturernameapple
func (AVAudioUnitComponent) AVAudioUnitTypeEffect ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeEffect() string
An audio unit type that represents an effect.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypeeffect
func (AVAudioUnitComponent) AVAudioUnitTypeFormatConverter ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeFormatConverter() string
An audio unit type that represents a format converter.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypeformatconverter
func (AVAudioUnitComponent) AVAudioUnitTypeGenerator ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeGenerator() string
An audio unit type that represents a generator.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypegenerator
func (AVAudioUnitComponent) AVAudioUnitTypeMIDIProcessor ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeMIDIProcessor() string
An audio unit type that represents a MIDI processor.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypemidiprocessor
func (AVAudioUnitComponent) AVAudioUnitTypeMixer ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeMixer() string
An audio unit type that represents a mixer.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypemixer
func (AVAudioUnitComponent) AVAudioUnitTypeMusicDevice ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeMusicDevice() string
An audio unit type that represents a music device.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypemusicdevice
func (AVAudioUnitComponent) AVAudioUnitTypeMusicEffect ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeMusicEffect() string
An audio unit type that represents a music effect.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypemusiceffect
func (AVAudioUnitComponent) AVAudioUnitTypeOfflineEffect ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeOfflineEffect() string
An audio unit type that represents an offline effect.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypeofflineeffect
func (AVAudioUnitComponent) AVAudioUnitTypeOutput ¶
func (a AVAudioUnitComponent) AVAudioUnitTypeOutput() string
An audio unit type that represents an output.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypeoutput
func (AVAudioUnitComponent) AVAudioUnitTypePanner ¶
func (a AVAudioUnitComponent) AVAudioUnitTypePanner() string
An audio unit type that represents a panner.
See: https://developer.apple.com/documentation/avfaudio/avaudiounittypepanner
func (AVAudioUnitComponent) AllTagNames ¶
func (a AVAudioUnitComponent) AllTagNames() []string
An array of tag names for the audio unit component.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/allTagNames
func (AVAudioUnitComponent) AudioComponent ¶
func (a AVAudioUnitComponent) AudioComponent() objectivec.IObject
The underlying audio component.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/audioComponent
func (AVAudioUnitComponent) AudioComponentDescription ¶
func (a AVAudioUnitComponent) AudioComponentDescription() objectivec.IObject
The audio component description.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/audioComponentDescription
func (AVAudioUnitComponent) Autorelease ¶
func (a AVAudioUnitComponent) Autorelease() AVAudioUnitComponent
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitComponent) AvailableArchitectures ¶
func (a AVAudioUnitComponent) AvailableArchitectures() []foundation.NSNumber
An array of architectures that the audio unit supports.
Discussion ¶
This is an [NSArray] of [NSNumber] values, where each entry corresponds to one of the Mach-O architecture constants in Bundle.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/availableArchitectures
func (AVAudioUnitComponent) ConfigurationDictionary ¶
func (a AVAudioUnitComponent) ConfigurationDictionary() foundation.INSDictionary
The audio unit component’s configuration dictionary.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/configurationDictionary
func (AVAudioUnitComponent) HasCustomView ¶
func (a AVAudioUnitComponent) HasCustomView() bool
A Boolean value that indicates whether the audio unit component has a custom view.
Discussion ¶
This property is true if the component has a custom view; otherwise, false.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/hasCustomView
func (AVAudioUnitComponent) HasMIDIInput ¶
func (a AVAudioUnitComponent) HasMIDIInput() bool
A Boolean value that indicates whether the audio unit component has MIDI input.
Discussion ¶
This property is true if the component has MIDI input; otherwise, false.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/hasMIDIInput
func (AVAudioUnitComponent) HasMIDIOutput ¶
func (a AVAudioUnitComponent) HasMIDIOutput() bool
A Boolean value that indicates whether the audio unit component has MIDI output.
Discussion ¶
This property is true if the component has MIDI output; otherwise, false.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/hasMIDIOutput
func (AVAudioUnitComponent) Icon ¶
func (a AVAudioUnitComponent) Icon() objc.ID
An icon that represents the component.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/icon
func (AVAudioUnitComponent) IconURL ¶
func (a AVAudioUnitComponent) IconURL() foundation.INSURL
The URL of an icon that represents the audio unit component.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/iconURL
func (AVAudioUnitComponent) Init ¶
func (a AVAudioUnitComponent) Init() AVAudioUnitComponent
Init initializes the instance.
func (AVAudioUnitComponent) LocalizedTypeName ¶
func (a AVAudioUnitComponent) LocalizedTypeName() string
The localized type name of the component.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/localizedTypeName
func (AVAudioUnitComponent) ManufacturerName ¶
func (a AVAudioUnitComponent) ManufacturerName() string
The name of the manufacturer of the audio unit component.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/manufacturerName
func (AVAudioUnitComponent) Name ¶
func (a AVAudioUnitComponent) Name() string
The name of the audio unit component.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/name
func (AVAudioUnitComponent) PassesAUVal ¶
func (a AVAudioUnitComponent) PassesAUVal() bool
A Boolean value that indicates whether the audio unit component passes the validation tests.
Discussion ¶
This property is true if the component passes the validation tests; otherwise, false.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/passesAUVal
func (AVAudioUnitComponent) SandboxSafe ¶
func (a AVAudioUnitComponent) SandboxSafe() bool
A Boolean value that indicates whether the audio unit component is safe for sandboxing.
Discussion ¶
This property is true if the component is safe for sandboxing; otherwise, false. This only applies to the current process.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/isSandboxSafe
func (AVAudioUnitComponent) SetUserTagNames ¶
func (a AVAudioUnitComponent) SetUserTagNames(value []string)
func (AVAudioUnitComponent) SupportsNumberInputChannelsOutputChannels ¶
func (a AVAudioUnitComponent) SupportsNumberInputChannelsOutputChannels(numInputChannels int, numOutputChannels int) bool
Gets a Boolean value that indicates whether the audio unit component supports the specified number of input and output channels.
numInputChannels: The number of input channels.
numOutputChannels: The number of output channels.
Return Value ¶
A value of true if the audio unit component supports the specified number of input and output channels; otherwise, false.
func (AVAudioUnitComponent) TypeName ¶
func (a AVAudioUnitComponent) TypeName() string
The audio unit component type.
Discussion ¶
For information about possible values, see the “Audio Unit Types” topic under AVAudioUnitComponent.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/typeName
func (AVAudioUnitComponent) UserTagNames ¶
func (a AVAudioUnitComponent) UserTagNames() []string
An array of tags the user creates.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/userTagNames
func (AVAudioUnitComponent) Version ¶
func (a AVAudioUnitComponent) Version() uint
The audio unit component version number.
Discussion ¶
The version number is packed as a hexadecimal value with a major, minor, and dot-release format, such as `0xMMMMmmDD`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/version
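Unpacking the `0xMMMMmmDD` layout is straightforward bit arithmetic: 16 bits of major version, then 8 of minor, then 8 of dot release. A small sketch (the `versionString` helper is illustrative, not part of the bindings):

```go
package main

import "fmt"

// versionString unpacks a 0xMMMMmmDD-packed version number into the
// familiar major.minor.dot form.
func versionString(v uint) string {
	major := v >> 16        // top 16 bits
	minor := (v >> 8) & 0xFF // next 8 bits
	dot := v & 0xFF          // low 8 bits
	return fmt.Sprintf("%d.%d.%d", major, minor, dot)
}

func main() {
	fmt.Println(versionString(0x00010203)) // 1.2.3
}
```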
func (AVAudioUnitComponent) VersionString ¶
func (a AVAudioUnitComponent) VersionString() string
A string that represents the audio unit component version number.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent/versionString
type AVAudioUnitComponentClass ¶
type AVAudioUnitComponentClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitComponentClass ¶
func GetAVAudioUnitComponentClass() AVAudioUnitComponentClass
GetAVAudioUnitComponentClass returns the class object for AVAudioUnitComponent.
func (AVAudioUnitComponentClass) Alloc ¶
func (ac AVAudioUnitComponentClass) Alloc() AVAudioUnitComponent
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitComponentClass) Class ¶
func (ac AVAudioUnitComponentClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitComponentHandler ¶
type AVAudioUnitComponentHandler = func(*AVAudioUnitComponent)
AVAudioUnitComponentHandler is the block to apply to each audio unit component during a search.
- comp: The component to test.
- stop: A reference to a Boolean value. Set it to `true` within the block to stop further processing of the search. The stop argument is out-only; only ever set it to `true`.
Used by:
type AVAudioUnitComponentManager ¶
type AVAudioUnitComponentManager struct {
objectivec.Object
}
An object that provides a way to search and query audio components that the system registers.
Overview ¶
The component manager has methods to find various information about the audio components without opening them. Currently, you can only search audio components that are audio units.
The class supports system tags and arbitrary user tags. You can tag each audio unit as part of its definition. Audio unit hosts, such as Logic or GarageBand, can present groupings of audio units according to the tags.
You can search for audio units in the following ways:
- Using an [NSPredicate] instance that contains search strings for tags or descriptions
- Using a block to match on custom criteria
- Using an [AudioComponentDescription]
Getting matching audio components ¶
- AVAudioUnitComponentManager.ComponentsMatchingDescription: Gets an array of audio component objects that match the description.
- AVAudioUnitComponentManager.ComponentsMatchingPredicate: Gets an array of audio component objects that match the search predicate.
- AVAudioUnitComponentManager.ComponentsPassingTest: Gets an array of audio components that pass the block method.
Getting audio unit tags ¶
- AVAudioUnitComponentManager.StandardLocalizedTagNames: An array of the localized standard system tags the audio units define.
- AVAudioUnitComponentManager.TagNames: An array of all tags the audio unit associates with the current user, and the system tags the audio units define.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponentManager
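The search styles above can be sketched in Go roughly as follows. This is a minimal illustration, not a definitive recipe: it uses only the APIs documented on this page, and the module import path is hypothetical.

```go
// Sketch: enumerate installed audio unit components via the shared manager.
package main

import (
	"fmt"

	avfaudio "example.com/avfaudio" // hypothetical import path
)

func main() {
	manager := avfaudio.GetAVAudioUnitComponentManagerClass().SharedAudioUnitComponentManager()

	// ComponentsPassingTest calls the handler once per discovered component.
	comps := manager.ComponentsPassingTest(func(comp *avfaudio.AVAudioUnitComponent) {
		// Inspect each component here without opening it.
	})
	fmt.Printf("found %d audio unit components\n", len(comps))

	// All system and user tags defined by the installed audio units.
	for _, tag := range manager.TagNames() {
		fmt.Println("tag:", tag)
	}
}
```

A predicate-based search through ComponentsMatchingPredicate works the same way, given an NSPredicate built from a search string such as `typeName CONTAINS 'Effect'`.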
func AVAudioUnitComponentManagerFromID ¶
func AVAudioUnitComponentManagerFromID(id objc.ID) AVAudioUnitComponentManager
AVAudioUnitComponentManagerFromID constructs an AVAudioUnitComponentManager from an objc.ID.
An object that provides a way to search and query audio components that the system registers.
func NewAVAudioUnitComponentManager ¶
func NewAVAudioUnitComponentManager() AVAudioUnitComponentManager
NewAVAudioUnitComponentManager creates a new AVAudioUnitComponentManager instance.
func (AVAudioUnitComponentManager) Autorelease ¶
func (a AVAudioUnitComponentManager) Autorelease() AVAudioUnitComponentManager
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitComponentManager) ComponentsMatchingDescription ¶
func (a AVAudioUnitComponentManager) ComponentsMatchingDescription(desc objectivec.IObject) []AVAudioUnitComponent
Gets an array of audio component objects that match the description.
desc is a [audiotoolbox.AudioComponentDescription].
Return Value ¶
An array of [AVAudioUnitComponent] objects that match the `description`.
Discussion ¶
- desc: The AudioComponentDescription structure to match. The method uses the `type`, `subtype` and `manufacturer` fields to search for matching audio units. A value of `0` for any of these fields is a wildcard and returns the first match the method finds.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponentManager/components(matching:)-9qt94
func (AVAudioUnitComponentManager) ComponentsMatchingPredicate ¶
func (a AVAudioUnitComponentManager) ComponentsMatchingPredicate(predicate foundation.INSPredicate) []AVAudioUnitComponent
Gets an array of audio component objects that match the search predicate.
predicate: The search predicate.
Return Value ¶
An array of [AVAudioUnitComponent] objects that match the predicate.
Discussion ¶
You use the audio component’s information or tags to build search criteria, such as `typeName CONTAINS 'Effect'` or `tags IN {'Sampler', 'MIDI'}`.
func (AVAudioUnitComponentManager) ComponentsPassingTest ¶
func (a AVAudioUnitComponentManager) ComponentsPassingTest(testHandler AVAudioUnitComponentHandler) []AVAudioUnitComponent
Gets an array of audio components that pass the block method.
testHandler: The block to apply to the audio unit components.
In the underlying Objective-C API, the block takes two parameters:
comp: The audio unit component to test. stop: A reference to a Boolean value; to stop further processing of the search, set it to `true` within the block. The stop argument is an out-only argument. (This Go binding's handler receives only the component.)
The block returns a Boolean value that indicates whether `comp` passes the test.
Return Value ¶
An array of audio components that pass the test.
Discussion ¶
For each audio component the manager finds, the system calls the block. If the block returns true, the method adds the [AVAudioUnitComponent] instance to the array.
func (AVAudioUnitComponentManager) Init ¶
func (a AVAudioUnitComponentManager) Init() AVAudioUnitComponentManager
Init initializes the instance.
func (AVAudioUnitComponentManager) StandardLocalizedTagNames ¶
func (a AVAudioUnitComponentManager) StandardLocalizedTagNames() []string
An array of the localized standard system tags the audio units define.
func (AVAudioUnitComponentManager) TagNames ¶
func (a AVAudioUnitComponentManager) TagNames() []string
An array of all tags the audio unit associates with the current user, and the system tags the audio units define.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponentManager/tagNames
type AVAudioUnitComponentManagerClass ¶
type AVAudioUnitComponentManagerClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitComponentManagerClass ¶
func GetAVAudioUnitComponentManagerClass() AVAudioUnitComponentManagerClass
GetAVAudioUnitComponentManagerClass returns the class object for AVAudioUnitComponentManager.
func (AVAudioUnitComponentManagerClass) Alloc ¶
func (ac AVAudioUnitComponentManagerClass) Alloc() AVAudioUnitComponentManager
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitComponentManagerClass) Class ¶
func (ac AVAudioUnitComponentManagerClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVAudioUnitComponentManagerClass) SharedAudioUnitComponentManager ¶
func (_AVAudioUnitComponentManagerClass AVAudioUnitComponentManagerClass) SharedAudioUnitComponentManager() AVAudioUnitComponentManager
Gets the shared component manager instance.
Return Value ¶
The singleton instance of the AVAudioUnitComponentManager object.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponentManager/shared()
type AVAudioUnitDelay ¶
type AVAudioUnitDelay struct {
AVAudioUnitEffect
}
An object that implements a delay effect.
Overview ¶
A delay unit delays the input signal by the specified time interval and then blends it with the input signal. You can also control the amount of high-frequency roll-off to simulate the effect of a tape delay.
Getting and setting the delay values ¶
- AVAudioUnitDelay.DelayTime: The time for the input signal to reach the output.
- AVAudioUnitDelay.SetDelayTime
- AVAudioUnitDelay.Feedback: The amount of the output signal that feeds back into the delay line.
- AVAudioUnitDelay.SetFeedback
- AVAudioUnitDelay.LowPassCutoff: The cutoff frequency above which high frequency content rolls off, in hertz.
- AVAudioUnitDelay.SetLowPassCutoff
- AVAudioUnitDelay.WetDryMix: The blend of the wet and dry signals.
- AVAudioUnitDelay.SetWetDryMix
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDelay
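A minimal configuration sketch for this type, using only its documented setters with values inside the stated ranges (and assuming this package is imported as `avfaudio`):

```go
// Create a delay unit and set its parameters within the documented ranges.
delay := avfaudio.NewAVAudioUnitDelay()
delay.SetDelayTime(0.25)     // seconds; valid range 0–2, default 1
delay.SetFeedback(40)        // percent; valid range -100–100, default 50
delay.SetLowPassCutoff(8000) // Hz; valid range 10–sampleRate/2, default 15000
delay.SetWetDryMix(30)       // percent; 0 is all dry, default 100
```

The configured node would then be attached and connected in an AVAudioEngine graph, which this section does not cover.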
func AVAudioUnitDelayFromID ¶
func AVAudioUnitDelayFromID(id objc.ID) AVAudioUnitDelay
AVAudioUnitDelayFromID constructs an AVAudioUnitDelay from an objc.ID.
An object that implements a delay effect.
func NewAVAudioUnitDelay ¶
func NewAVAudioUnitDelay() AVAudioUnitDelay
NewAVAudioUnitDelay creates a new AVAudioUnitDelay instance.
func NewAudioUnitDelayWithAudioComponentDescription ¶
func NewAudioUnitDelayWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitDelay
Creates an audio unit effect object with the specified description.
audioComponentDescription: The description of the audio unit to create.
The `audioComponentDescription` must be one of these types: `kAudioUnitType_Effect`, `kAudioUnitType_MusicEffect`, `kAudioUnitType_Panner`, `kAudioUnitType_RemoteEffect`, or `kAudioUnitType_RemoteMusicEffect`.
Return Value ¶
A new AVAudioUnitEffect instance.
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect/init(audioComponentDescription:)
func (AVAudioUnitDelay) Autorelease ¶
func (a AVAudioUnitDelay) Autorelease() AVAudioUnitDelay
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitDelay) DelayTime ¶
func (a AVAudioUnitDelay) DelayTime() float64
The time for the input signal to reach the output.
Discussion ¶
You specify the delay in seconds. The default value is `1`. The valid range of values is `0` to `2` seconds.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDelay/delayTime
func (AVAudioUnitDelay) Feedback ¶
func (a AVAudioUnitDelay) Feedback() float32
The amount of the output signal that feeds back into the delay line.
Discussion ¶
You specify the feedback as a percentage. The default value is `50%`. The valid range of values is `-100%` to `100%`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDelay/feedback
func (AVAudioUnitDelay) Init ¶
func (a AVAudioUnitDelay) Init() AVAudioUnitDelay
Init initializes the instance.
func (AVAudioUnitDelay) LowPassCutoff ¶
func (a AVAudioUnitDelay) LowPassCutoff() float32
The cutoff frequency above which high frequency content rolls off, in hertz.
Discussion ¶
The default value is `15000 Hz`. The valid range of values is `10 Hz` through `(sampleRate/2)`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDelay/lowPassCutoff
func (AVAudioUnitDelay) SetDelayTime ¶
func (a AVAudioUnitDelay) SetDelayTime(value float64)
func (AVAudioUnitDelay) SetFeedback ¶
func (a AVAudioUnitDelay) SetFeedback(value float32)
func (AVAudioUnitDelay) SetLowPassCutoff ¶
func (a AVAudioUnitDelay) SetLowPassCutoff(value float32)
func (AVAudioUnitDelay) SetWetDryMix ¶
func (a AVAudioUnitDelay) SetWetDryMix(value float32)
func (AVAudioUnitDelay) WetDryMix ¶
func (a AVAudioUnitDelay) WetDryMix() float32
The blend of the wet and dry signals.
Discussion ¶
You specify the blend as a percentage. The default value is `100%`. The valid range of values is `0%` through `100%`, where `0%` represents all dry.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDelay/wetDryMix
type AVAudioUnitDelayClass ¶
type AVAudioUnitDelayClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitDelayClass ¶
func GetAVAudioUnitDelayClass() AVAudioUnitDelayClass
GetAVAudioUnitDelayClass returns the class object for AVAudioUnitDelay.
func (AVAudioUnitDelayClass) Alloc ¶
func (ac AVAudioUnitDelayClass) Alloc() AVAudioUnitDelay
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitDelayClass) Class ¶
func (ac AVAudioUnitDelayClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitDistortion ¶
type AVAudioUnitDistortion struct {
AVAudioUnitEffect
}
An object that implements a multistage distortion effect.
Configuring the distortion ¶
- AVAudioUnitDistortion.LoadFactoryPreset: Configures the audio distortion unit by loading a distortion preset.
Getting and setting the distortion values ¶
- AVAudioUnitDistortion.PreGain: The gain that the audio unit applies to the signal before distortion, in decibels.
- AVAudioUnitDistortion.SetPreGain
- AVAudioUnitDistortion.WetDryMix: The blend of the distorted and dry signals.
- AVAudioUnitDistortion.SetWetDryMix
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDistortion
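A short sketch of the typical flow for this type: load a factory preset, then adjust the documented properties (assuming this package is imported as `avfaudio`):

```go
// Create a distortion unit, load a factory preset, then adjust the mix.
dist := avfaudio.NewAVAudioUnitDistortion()
dist.LoadFactoryPreset(avfaudio.AVAudioUnitDistortionPresetSpeechRadioTower)
dist.SetPreGain(-6)   // dB; valid range -80 to 20, default -6
dist.SetWetDryMix(50) // percent; 0 is all dry, default 50
```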
func AVAudioUnitDistortionFromID ¶
func AVAudioUnitDistortionFromID(id objc.ID) AVAudioUnitDistortion
AVAudioUnitDistortionFromID constructs an AVAudioUnitDistortion from an objc.ID.
An object that implements a multistage distortion effect.
func NewAVAudioUnitDistortion ¶
func NewAVAudioUnitDistortion() AVAudioUnitDistortion
NewAVAudioUnitDistortion creates a new AVAudioUnitDistortion instance.
func NewAudioUnitDistortionWithAudioComponentDescription ¶
func NewAudioUnitDistortionWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitDistortion
Creates an audio unit effect object with the specified description.
audioComponentDescription: The description of the audio unit to create.
The `audioComponentDescription` must be one of these types: `kAudioUnitType_Effect`, `kAudioUnitType_MusicEffect`, `kAudioUnitType_Panner`, `kAudioUnitType_RemoteEffect`, or `kAudioUnitType_RemoteMusicEffect`.
Return Value ¶
A new AVAudioUnitEffect instance.
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect/init(audioComponentDescription:)
func (AVAudioUnitDistortion) Autorelease ¶
func (a AVAudioUnitDistortion) Autorelease() AVAudioUnitDistortion
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitDistortion) Init ¶
func (a AVAudioUnitDistortion) Init() AVAudioUnitDistortion
Init initializes the instance.
func (AVAudioUnitDistortion) LoadFactoryPreset ¶
func (a AVAudioUnitDistortion) LoadFactoryPreset(preset AVAudioUnitDistortionPreset)
Configures the audio distortion unit by loading a distortion preset.
preset: The distortion preset.
Discussion ¶
For more information about possible values for `preset`, see AVAudioUnitDistortionPreset. The default value is [AVAudioUnitDistortionPresetDrumsBitBrush].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDistortion/loadFactoryPreset(_:)
func (AVAudioUnitDistortion) PreGain ¶
func (a AVAudioUnitDistortion) PreGain() float32
The gain that the audio unit applies to the signal before distortion, in decibels.
Discussion ¶
The default value is `-6 dB`. The valid range of values is `-80 dB` to `20 dB`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDistortion/preGain
func (AVAudioUnitDistortion) SetPreGain ¶
func (a AVAudioUnitDistortion) SetPreGain(value float32)
func (AVAudioUnitDistortion) SetWetDryMix ¶
func (a AVAudioUnitDistortion) SetWetDryMix(value float32)
func (AVAudioUnitDistortion) WetDryMix ¶
func (a AVAudioUnitDistortion) WetDryMix() float32
The blend of the distorted and dry signals.
Discussion ¶
You specify the blend as a percentage. The default value is `50%`. The valid range is `0%` through `100%`, where `0` represents all dry.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDistortion/wetDryMix
type AVAudioUnitDistortionClass ¶
type AVAudioUnitDistortionClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitDistortionClass ¶
func GetAVAudioUnitDistortionClass() AVAudioUnitDistortionClass
GetAVAudioUnitDistortionClass returns the class object for AVAudioUnitDistortion.
func (AVAudioUnitDistortionClass) Alloc ¶
func (ac AVAudioUnitDistortionClass) Alloc() AVAudioUnitDistortion
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitDistortionClass) Class ¶
func (ac AVAudioUnitDistortionClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitDistortionPreset ¶
type AVAudioUnitDistortionPreset int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDistortionPreset
const (
	// AVAudioUnitDistortionPresetDrumsBitBrush: A preset that represents a bit brush drums distortion.
	AVAudioUnitDistortionPresetDrumsBitBrush AVAudioUnitDistortionPreset = 0
	// AVAudioUnitDistortionPresetDrumsBufferBeats: A preset that represents a buffer beat drums distortion.
	AVAudioUnitDistortionPresetDrumsBufferBeats AVAudioUnitDistortionPreset = 1
	// AVAudioUnitDistortionPresetDrumsLoFi: A preset that represents a low fidelity drums distortion.
	AVAudioUnitDistortionPresetDrumsLoFi AVAudioUnitDistortionPreset = 2
	// AVAudioUnitDistortionPresetMultiBrokenSpeaker: A preset that represents a broken speaker distortion.
	AVAudioUnitDistortionPresetMultiBrokenSpeaker AVAudioUnitDistortionPreset = 3
	// AVAudioUnitDistortionPresetMultiCellphoneConcert: A preset that represents a cellphone concert distortion.
	AVAudioUnitDistortionPresetMultiCellphoneConcert AVAudioUnitDistortionPreset = 4
	// AVAudioUnitDistortionPresetMultiDecimated1: A preset that represents a variant of the decimated distortion.
	AVAudioUnitDistortionPresetMultiDecimated1 AVAudioUnitDistortionPreset = 5
	// AVAudioUnitDistortionPresetMultiDecimated2: A preset that represents a variant of the decimated distortion.
	AVAudioUnitDistortionPresetMultiDecimated2 AVAudioUnitDistortionPreset = 6
	// AVAudioUnitDistortionPresetMultiDecimated3: A preset that represents a variant of the decimated distortion.
	AVAudioUnitDistortionPresetMultiDecimated3 AVAudioUnitDistortionPreset = 7
	// AVAudioUnitDistortionPresetMultiDecimated4: A preset that represents a variant of the decimated distortion.
	AVAudioUnitDistortionPresetMultiDecimated4 AVAudioUnitDistortionPreset = 8
	// AVAudioUnitDistortionPresetMultiDistortedFunk: A preset that represents a distorted funk distortion.
	AVAudioUnitDistortionPresetMultiDistortedFunk AVAudioUnitDistortionPreset = 9
	// AVAudioUnitDistortionPresetMultiDistortedCubed: A preset that represents a distorted cubed distortion.
	AVAudioUnitDistortionPresetMultiDistortedCubed AVAudioUnitDistortionPreset = 10
	// AVAudioUnitDistortionPresetMultiDistortedSquared: A preset that represents a distorted squared distortion.
	AVAudioUnitDistortionPresetMultiDistortedSquared AVAudioUnitDistortionPreset = 11
	// AVAudioUnitDistortionPresetMultiEcho1: A preset that represents a variant of an echo distortion.
	AVAudioUnitDistortionPresetMultiEcho1 AVAudioUnitDistortionPreset = 12
	// AVAudioUnitDistortionPresetMultiEcho2: A preset that represents a variant of an echo distortion.
	AVAudioUnitDistortionPresetMultiEcho2 AVAudioUnitDistortionPreset = 13
	// AVAudioUnitDistortionPresetMultiEchoTight1: A preset that represents a variant of a tight echo distortion.
	AVAudioUnitDistortionPresetMultiEchoTight1 AVAudioUnitDistortionPreset = 14
	// AVAudioUnitDistortionPresetMultiEchoTight2: A preset that represents a variant of a tight echo distortion.
	AVAudioUnitDistortionPresetMultiEchoTight2 AVAudioUnitDistortionPreset = 15
	// AVAudioUnitDistortionPresetMultiEverythingIsBroken: A preset that represents an everything-is-broken distortion.
	AVAudioUnitDistortionPresetMultiEverythingIsBroken AVAudioUnitDistortionPreset = 16
	// AVAudioUnitDistortionPresetSpeechAlienChatter: A preset that represents an alien chatter distortion.
	AVAudioUnitDistortionPresetSpeechAlienChatter AVAudioUnitDistortionPreset = 17
	// AVAudioUnitDistortionPresetSpeechCosmicInterference: A preset that represents a cosmic interference distortion.
	AVAudioUnitDistortionPresetSpeechCosmicInterference AVAudioUnitDistortionPreset = 18
	// AVAudioUnitDistortionPresetSpeechGoldenPi: A preset that represents a golden pi distortion.
	AVAudioUnitDistortionPresetSpeechGoldenPi AVAudioUnitDistortionPreset = 19
	// AVAudioUnitDistortionPresetSpeechRadioTower: A preset that represents a radio tower distortion.
	AVAudioUnitDistortionPresetSpeechRadioTower AVAudioUnitDistortionPreset = 20
	// AVAudioUnitDistortionPresetSpeechWaves: A preset that represents a speech wave distortion.
	AVAudioUnitDistortionPresetSpeechWaves AVAudioUnitDistortionPreset = 21
)
func (AVAudioUnitDistortionPreset) String ¶
func (e AVAudioUnitDistortionPreset) String() string
type AVAudioUnitEQ ¶
type AVAudioUnitEQ struct {
AVAudioUnitEffect
}
An object that implements a multiband equalizer.
Overview ¶
The AVAudioUnitEQFilterParameters class encapsulates the filter parameters that the AVAudioUnitEQ.Bands property array returns.
Creating an equalizer ¶
- AVAudioUnitEQ.InitWithNumberOfBands: Creates an audio unit equalizer object with the specified number of bands.
Getting and setting the equalizer values ¶
- AVAudioUnitEQ.Bands: An array of equalizer filter parameters.
- AVAudioUnitEQ.GlobalGain: The overall gain adjustment that the audio unit applies to the signal, in decibels.
- AVAudioUnitEQ.SetGlobalGain
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQ
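A sketch of creating an equalizer and configuring its bands through the filter-parameter objects, using only the accessors documented on this page (assuming this package is imported as `avfaudio`):

```go
// Create a two-band equalizer and configure each band's filter parameters.
eq := avfaudio.NewAudioUnitEQWithNumberOfBands(2)
bands := eq.Bands() // one AVAudioUnitEQFilterParameters per band

low := bands[0]
low.SetFilterType(avfaudio.AVAudioUnitEQFilterTypeLowShelf)
low.SetFrequency(120) // Hz; valid range 20–sampleRate/2
low.SetGain(3)        // dB; valid range -96 to 24
low.SetBypass(false)

mid := bands[1]
mid.SetFilterType(avfaudio.AVAudioUnitEQFilterTypeParametric)
mid.SetFrequency(2500)
mid.SetBandwidth(1.0) // octaves; valid range 0.05–5.0
mid.SetGain(-4)

eq.SetGlobalGain(0) // dB; overall adjustment, valid range -96 to 24
```

Because each element wraps an underlying Objective-C object, setting a band's parameters through the slice element updates the equalizer's band directly.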
func AVAudioUnitEQFromID ¶
func AVAudioUnitEQFromID(id objc.ID) AVAudioUnitEQ
AVAudioUnitEQFromID constructs an AVAudioUnitEQ from an objc.ID.
An object that implements a multiband equalizer.
func NewAVAudioUnitEQ ¶
func NewAVAudioUnitEQ() AVAudioUnitEQ
NewAVAudioUnitEQ creates a new AVAudioUnitEQ instance.
func NewAudioUnitEQWithAudioComponentDescription ¶
func NewAudioUnitEQWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitEQ
Creates an audio unit effect object with the specified description.
audioComponentDescription: The description of the audio unit to create.
The `audioComponentDescription` must be one of these types: `kAudioUnitType_Effect`, `kAudioUnitType_MusicEffect`, `kAudioUnitType_Panner`, `kAudioUnitType_RemoteEffect`, or `kAudioUnitType_RemoteMusicEffect`.
Return Value ¶
A new AVAudioUnitEffect instance.
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect/init(audioComponentDescription:)
func NewAudioUnitEQWithNumberOfBands ¶
func NewAudioUnitEQWithNumberOfBands(numberOfBands uint) AVAudioUnitEQ
Creates an audio unit equalizer object with the specified number of bands.
numberOfBands: The number of bands that the equalizer creates.
Return Value ¶
A new AVAudioUnitEQ instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQ/init(numberOfBands:)
func (AVAudioUnitEQ) Autorelease ¶
func (a AVAudioUnitEQ) Autorelease() AVAudioUnitEQ
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitEQ) Bands ¶
func (a AVAudioUnitEQ) Bands() []AVAudioUnitEQFilterParameters
An array of equalizer filter parameters.
Discussion ¶
The number of elements in the array is equal to the number of bands.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQ/bands
func (AVAudioUnitEQ) GlobalGain ¶
func (a AVAudioUnitEQ) GlobalGain() float32
The overall gain adjustment that the audio unit applies to the signal, in decibels.
Discussion ¶
The default value is `0 dB`. The valid range of values is `-96 dB` to `24 dB`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQ/globalGain
func (AVAudioUnitEQ) Init ¶
func (a AVAudioUnitEQ) Init() AVAudioUnitEQ
Init initializes the instance.
func (AVAudioUnitEQ) InitWithNumberOfBands ¶
func (a AVAudioUnitEQ) InitWithNumberOfBands(numberOfBands uint) AVAudioUnitEQ
Creates an audio unit equalizer object with the specified number of bands.
numberOfBands: The number of bands that the equalizer creates.
Return Value ¶
A new AVAudioUnitEQ instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQ/init(numberOfBands:)
func (AVAudioUnitEQ) SetGlobalGain ¶
func (a AVAudioUnitEQ) SetGlobalGain(value float32)
type AVAudioUnitEQClass ¶
type AVAudioUnitEQClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitEQClass ¶
func GetAVAudioUnitEQClass() AVAudioUnitEQClass
GetAVAudioUnitEQClass returns the class object for AVAudioUnitEQ.
func (AVAudioUnitEQClass) Alloc ¶
func (ac AVAudioUnitEQClass) Alloc() AVAudioUnitEQ
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitEQClass) Class ¶
func (ac AVAudioUnitEQClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitEQFilterParameters ¶
type AVAudioUnitEQFilterParameters struct {
objectivec.Object
}
An object that encapsulates the parameters that the equalizer uses.
Getting and setting equalizer filter parameters ¶
- AVAudioUnitEQFilterParameters.Bandwidth: The bandwidth of the equalizer filter, in octaves.
- AVAudioUnitEQFilterParameters.SetBandwidth
- AVAudioUnitEQFilterParameters.Bypass: The bypass state of the equalizer filter band.
- AVAudioUnitEQFilterParameters.SetBypass
- AVAudioUnitEQFilterParameters.FilterType: The equalizer filter type.
- AVAudioUnitEQFilterParameters.SetFilterType
- AVAudioUnitEQFilterParameters.Frequency: The frequency of the equalizer filter, in hertz.
- AVAudioUnitEQFilterParameters.SetFrequency
- AVAudioUnitEQFilterParameters.Gain: The gain of the equalizer filter, in decibels.
- AVAudioUnitEQFilterParameters.SetGain
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQFilterParameters
func AVAudioUnitEQFilterParametersFromID ¶
func AVAudioUnitEQFilterParametersFromID(id objc.ID) AVAudioUnitEQFilterParameters
AVAudioUnitEQFilterParametersFromID constructs an AVAudioUnitEQFilterParameters from an objc.ID.
An object that encapsulates the parameters that the equalizer uses.
func NewAVAudioUnitEQFilterParameters ¶
func NewAVAudioUnitEQFilterParameters() AVAudioUnitEQFilterParameters
NewAVAudioUnitEQFilterParameters creates a new AVAudioUnitEQFilterParameters instance.
func (AVAudioUnitEQFilterParameters) Autorelease ¶
func (a AVAudioUnitEQFilterParameters) Autorelease() AVAudioUnitEQFilterParameters
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitEQFilterParameters) Bands ¶
func (a AVAudioUnitEQFilterParameters) Bands() IAVAudioUnitEQFilterParameters
An array of equalizer filter parameters.
See: https://developer.apple.com/documentation/avfaudio/avaudiouniteq/bands
func (AVAudioUnitEQFilterParameters) Bandwidth ¶
func (a AVAudioUnitEQFilterParameters) Bandwidth() float32
The bandwidth of the equalizer filter, in octaves.
Discussion ¶
The valid range of values is `0.05` to `5.0` octaves.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQFilterParameters/bandwidth
func (AVAudioUnitEQFilterParameters) Bypass ¶
func (a AVAudioUnitEQFilterParameters) Bypass() bool
The bypass state of the equalizer filter band.
Discussion ¶
true if the bypass is active; otherwise, false.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQFilterParameters/bypass
func (AVAudioUnitEQFilterParameters) FilterType ¶
func (a AVAudioUnitEQFilterParameters) FilterType() AVAudioUnitEQFilterType
The equalizer filter type.
Discussion ¶
The default value is [AVAudioUnitEQFilterTypeParametric].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQFilterParameters/filterType
func (AVAudioUnitEQFilterParameters) Frequency ¶
func (a AVAudioUnitEQFilterParameters) Frequency() float32
The frequency of the equalizer filter, in hertz.
Discussion ¶
The valid range of values is `20 Hz` through `(sampleRate/2)`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQFilterParameters/frequency
func (AVAudioUnitEQFilterParameters) Gain ¶
func (a AVAudioUnitEQFilterParameters) Gain() float32
The gain of the equalizer filter, in decibels.
Discussion ¶
The default value is `0 dB`. The valid range of values is `-96 dB` through `24 dB`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQFilterParameters/gain
func (AVAudioUnitEQFilterParameters) GlobalGain ¶
func (a AVAudioUnitEQFilterParameters) GlobalGain() float32
The overall gain adjustment that the audio unit applies to the signal, in decibels.
See: https://developer.apple.com/documentation/avfaudio/avaudiouniteq/globalgain
func (AVAudioUnitEQFilterParameters) Init ¶
func (a AVAudioUnitEQFilterParameters) Init() AVAudioUnitEQFilterParameters
Init initializes the instance.
func (AVAudioUnitEQFilterParameters) SetBands ¶
func (a AVAudioUnitEQFilterParameters) SetBands(value IAVAudioUnitEQFilterParameters)
func (AVAudioUnitEQFilterParameters) SetBandwidth ¶
func (a AVAudioUnitEQFilterParameters) SetBandwidth(value float32)
func (AVAudioUnitEQFilterParameters) SetBypass ¶
func (a AVAudioUnitEQFilterParameters) SetBypass(value bool)
func (AVAudioUnitEQFilterParameters) SetFilterType ¶
func (a AVAudioUnitEQFilterParameters) SetFilterType(value AVAudioUnitEQFilterType)
func (AVAudioUnitEQFilterParameters) SetFrequency ¶
func (a AVAudioUnitEQFilterParameters) SetFrequency(value float32)
func (AVAudioUnitEQFilterParameters) SetGain ¶
func (a AVAudioUnitEQFilterParameters) SetGain(value float32)
func (AVAudioUnitEQFilterParameters) SetGlobalGain ¶
func (a AVAudioUnitEQFilterParameters) SetGlobalGain(value float32)
type AVAudioUnitEQFilterParametersClass ¶
type AVAudioUnitEQFilterParametersClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitEQFilterParametersClass ¶
func GetAVAudioUnitEQFilterParametersClass() AVAudioUnitEQFilterParametersClass
GetAVAudioUnitEQFilterParametersClass returns the class object for AVAudioUnitEQFilterParameters.
func (AVAudioUnitEQFilterParametersClass) Alloc ¶
func (ac AVAudioUnitEQFilterParametersClass) Alloc() AVAudioUnitEQFilterParameters
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitEQFilterParametersClass) Class ¶
func (ac AVAudioUnitEQFilterParametersClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitEQFilterType ¶
type AVAudioUnitEQFilterType int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQFilterType
const (
	// AVAudioUnitEQFilterTypeParametric: A type that represents a parametric filter that derives from a Butterworth analog prototype.
	AVAudioUnitEQFilterTypeParametric AVAudioUnitEQFilterType = 0
	// AVAudioUnitEQFilterTypeLowPass: A type that represents a simple Butterworth second-order low-pass filter.
	AVAudioUnitEQFilterTypeLowPass AVAudioUnitEQFilterType = 1
	// AVAudioUnitEQFilterTypeHighPass: A type that represents a simple Butterworth second-order high-pass filter.
	AVAudioUnitEQFilterTypeHighPass AVAudioUnitEQFilterType = 2
	// AVAudioUnitEQFilterTypeResonantLowPass: A type that represents a low-pass filter with resonance support using the bandwidth parameter.
	AVAudioUnitEQFilterTypeResonantLowPass AVAudioUnitEQFilterType = 3
	// AVAudioUnitEQFilterTypeResonantHighPass: A type that represents a high-pass filter with resonance support using the bandwidth parameter.
	AVAudioUnitEQFilterTypeResonantHighPass AVAudioUnitEQFilterType = 4
	// AVAudioUnitEQFilterTypeBandPass: A type that represents a bandpass filter.
	AVAudioUnitEQFilterTypeBandPass AVAudioUnitEQFilterType = 5
	// AVAudioUnitEQFilterTypeBandStop: A type that represents a band-stop filter, also known as a notch filter.
	AVAudioUnitEQFilterTypeBandStop AVAudioUnitEQFilterType = 6
	// AVAudioUnitEQFilterTypeLowShelf: A type that represents a low-shelf filter.
	AVAudioUnitEQFilterTypeLowShelf AVAudioUnitEQFilterType = 7
	// AVAudioUnitEQFilterTypeHighShelf: A type that represents a high-shelf filter.
	AVAudioUnitEQFilterTypeHighShelf AVAudioUnitEQFilterType = 8
	// AVAudioUnitEQFilterTypeResonantLowShelf: A type that represents a low-shelf filter with resonance support using the bandwidth parameter.
	AVAudioUnitEQFilterTypeResonantLowShelf AVAudioUnitEQFilterType = 9
	// AVAudioUnitEQFilterTypeResonantHighShelf: A type that represents a high-shelf filter with resonance support using the bandwidth parameter.
	AVAudioUnitEQFilterTypeResonantHighShelf AVAudioUnitEQFilterType = 10
)
func (AVAudioUnitEQFilterType) String ¶
func (e AVAudioUnitEQFilterType) String() string
type AVAudioUnitEffect ¶
type AVAudioUnitEffect struct {
AVAudioUnit
}
An object that processes audio in real time.
Overview ¶
This processing uses an AudioUnit of type effect, music effect, panner, remote effect, or remote music effect. These effects run in real time, processing a number of audio input samples to produce a number of audio output samples. A delay unit is an example of an effect unit.
Creating an audio effect ¶
- AVAudioUnitEffect.InitWithAudioComponentDescription: Creates an audio unit effect object with the specified description.
Getting the bypass state ¶
- AVAudioUnitEffect.Bypass: The bypass state of the audio unit.
- AVAudioUnitEffect.SetBypass
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect
func AVAudioUnitEffectFromID ¶
func AVAudioUnitEffectFromID(id objc.ID) AVAudioUnitEffect
AVAudioUnitEffectFromID constructs an AVAudioUnitEffect from an objc.ID.
An object that processes audio in real time.
func NewAVAudioUnitEffect ¶
func NewAVAudioUnitEffect() AVAudioUnitEffect
NewAVAudioUnitEffect creates a new AVAudioUnitEffect instance.
func NewAudioUnitEffectWithAudioComponentDescription ¶
func NewAudioUnitEffectWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitEffect
Creates an audio unit effect object with the specified description.
audioComponentDescription: The description of the audio unit to create.
The `audioComponentDescription` must be one of these types: `kAudioUnitType_Effect`, `kAudioUnitType_MusicEffect`, `kAudioUnitType_Panner`, `kAudioUnitType_RemoteEffect`, or `kAudioUnitType_RemoteMusicEffect`.
Return Value ¶
A new AVAudioUnitEffect instance.
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect/init(audioComponentDescription:)
func (AVAudioUnitEffect) Autorelease ¶
func (a AVAudioUnitEffect) Autorelease() AVAudioUnitEffect
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitEffect) Bypass ¶
func (a AVAudioUnitEffect) Bypass() bool
The bypass state of the audio unit.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect/bypass
func (AVAudioUnitEffect) Init ¶
func (a AVAudioUnitEffect) Init() AVAudioUnitEffect
Init initializes the instance.
func (AVAudioUnitEffect) InitWithAudioComponentDescription ¶
func (a AVAudioUnitEffect) InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitEffect
Creates an audio unit effect object with the specified description.
audioComponentDescription: The description of the audio unit to create.
The `audioComponentDescription` must be one of these types: `kAudioUnitType_Effect`, `kAudioUnitType_MusicEffect`, `kAudioUnitType_Panner`, `kAudioUnitType_RemoteEffect`, or `kAudioUnitType_RemoteMusicEffect`.
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
Return Value ¶
A new AVAudioUnitEffect instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect/init(audioComponentDescription:)
func (AVAudioUnitEffect) SetBypass ¶
func (a AVAudioUnitEffect) SetBypass(value bool)
type AVAudioUnitEffectClass ¶
type AVAudioUnitEffectClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitEffectClass ¶
func GetAVAudioUnitEffectClass() AVAudioUnitEffectClass
GetAVAudioUnitEffectClass returns the class object for AVAudioUnitEffect.
func (AVAudioUnitEffectClass) Alloc ¶
func (ac AVAudioUnitEffectClass) Alloc() AVAudioUnitEffect
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitEffectClass) Class ¶
func (ac AVAudioUnitEffectClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitErrorHandler ¶
type AVAudioUnitErrorHandler = func(*AVAudioUnit, error)
AVAudioUnitErrorHandler is a handler the framework calls in an arbitrary thread context when creation completes. The error can be type-asserted to *foundation.NSError for Domain, Code, and UserInfo.
Used by:
- [AVAudioUnit.InstantiateWithComponentDescriptionOptionsCompletionHandler]
type AVAudioUnitGenerator ¶
type AVAudioUnitGenerator struct {
AVAudioUnit
}
An object that generates audio output.
Overview ¶
A generator represents an AudioUnit of type `kAudioUnitType_Generator` or `kAudioUnitType_RemoteGenerator`. A generator has no audio input, but produces audio output. An example is a tone generator.
Creating an audio unit generator ¶
- AVAudioUnitGenerator.InitWithAudioComponentDescription: Creates a generator audio unit with the specified description.
Getting and setting the bypass status ¶
- AVAudioUnitGenerator.Bypass: The bypass state of the audio unit.
- AVAudioUnitGenerator.SetBypass
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitGenerator
func AVAudioUnitGeneratorFromID ¶
func AVAudioUnitGeneratorFromID(id objc.ID) AVAudioUnitGenerator
AVAudioUnitGeneratorFromID constructs an AVAudioUnitGenerator from an objc.ID.
An object that generates audio output.
func NewAVAudioUnitGenerator ¶
func NewAVAudioUnitGenerator() AVAudioUnitGenerator
NewAVAudioUnitGenerator creates a new AVAudioUnitGenerator instance.
func NewAudioUnitGeneratorWithAudioComponentDescription ¶
func NewAudioUnitGeneratorWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitGenerator
Creates a generator audio unit with the specified description.
audioComponentDescription: The audio component description.
Return Value ¶
A new AVAudioUnitGenerator instance.
Discussion ¶
The AudioComponentDescription structure's `componentType` field must be `kAudioUnitType_Generator` or `kAudioUnitType_RemoteGenerator`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitGenerator/init(audioComponentDescription:)
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
func (AVAudioUnitGenerator) Autorelease ¶
func (a AVAudioUnitGenerator) Autorelease() AVAudioUnitGenerator
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitGenerator) Bypass ¶
func (a AVAudioUnitGenerator) Bypass() bool
The bypass state of the audio unit.
Discussion ¶
If true, the audio unit bypasses audio processing.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitGenerator/bypass
func (AVAudioUnitGenerator) DestinationForMixerBus ¶
func (a AVAudioUnitGenerator) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t connected to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
func (AVAudioUnitGenerator) Init ¶
func (a AVAudioUnitGenerator) Init() AVAudioUnitGenerator
Init initializes the instance.
func (AVAudioUnitGenerator) InitWithAudioComponentDescription ¶
func (a AVAudioUnitGenerator) InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitGenerator
Creates a generator audio unit with the specified description.
audioComponentDescription: The audio component description.
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
Return Value ¶
A new AVAudioUnitGenerator instance.
Discussion ¶
The AudioComponentDescription structure's `componentType` field must be `kAudioUnitType_Generator` or `kAudioUnitType_RemoteGenerator`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitGenerator/init(audioComponentDescription:)
func (AVAudioUnitGenerator) Obstruction ¶
func (a AVAudioUnitGenerator) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
Discussion ¶
The value of `obstruction` is in decibels. The system blocks only the direct path of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudioUnitGenerator) Occlusion ¶
func (a AVAudioUnitGenerator) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
Discussion ¶
The value of `occlusion` is in decibels. The system blocks the direct and reverb paths of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioUnitGenerator) Pan ¶
func (a AVAudioUnitGenerator) Pan() float32
The bus’s stereo pan.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-1.0` to `1.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioUnitGenerator) PointSourceInHeadMode ¶
func (a AVAudioUnitGenerator) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioUnitGenerator) Position ¶
func (a AVAudioUnitGenerator) Position() AVAudio3DPoint
The location of the source in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioUnitGenerator) Rate ¶
func (a AVAudioUnitGenerator) Rate() float32
A value that changes the playback rate of the input signal.
Discussion ¶
A value of `2.0` results in the output audio playing one octave higher. A value of `0.5` results in the output audio playing one octave lower.
The default value is `1.0`, and the range of valid values is `0.5` to `2.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
func (AVAudioUnitGenerator) RenderingAlgorithm ¶
func (a AVAudioUnitGenerator) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
Discussion ¶
Depending on the current output format of the AVAudioEnvironmentNode instance, the system may only support a subset of the rendering algorithms. You can retrieve an array of valid rendering algorithms by calling the [ApplicableRenderingAlgorithms] function of the AVAudioEnvironmentNode instance.
The default rendering algorithm is [AVAudio3DMixingRenderingAlgorithmEqualPowerPanning]. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioUnitGenerator) ReverbBlend ¶
func (a AVAudioUnitGenerator) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
Discussion ¶
This property controls the amount of the source’s audio that the AVAudioEnvironmentNode instance processes. A value of `0.5` results in an equal blend of dry and processed (wet) audio.
The default is `0.0`, and the range of valid values is `0.0` (completely dry) to `1.0` (completely wet). Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
func (AVAudioUnitGenerator) SetBypass ¶
func (a AVAudioUnitGenerator) SetBypass(value bool)
func (AVAudioUnitGenerator) SetObstruction ¶
func (a AVAudioUnitGenerator) SetObstruction(value float32)
func (AVAudioUnitGenerator) SetOcclusion ¶
func (a AVAudioUnitGenerator) SetOcclusion(value float32)
func (AVAudioUnitGenerator) SetPan ¶
func (a AVAudioUnitGenerator) SetPan(value float32)
func (AVAudioUnitGenerator) SetPointSourceInHeadMode ¶
func (a AVAudioUnitGenerator) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioUnitGenerator) SetPosition ¶
func (a AVAudioUnitGenerator) SetPosition(value AVAudio3DPoint)
func (AVAudioUnitGenerator) SetRate ¶
func (a AVAudioUnitGenerator) SetRate(value float32)
func (AVAudioUnitGenerator) SetRenderingAlgorithm ¶
func (a AVAudioUnitGenerator) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioUnitGenerator) SetReverbBlend ¶
func (a AVAudioUnitGenerator) SetReverbBlend(value float32)
func (AVAudioUnitGenerator) SetSourceMode ¶
func (a AVAudioUnitGenerator) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioUnitGenerator) SetVolume ¶
func (a AVAudioUnitGenerator) SetVolume(value float32)
func (AVAudioUnitGenerator) SourceMode ¶
func (a AVAudioUnitGenerator) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioUnitGenerator) Volume ¶
func (a AVAudioUnitGenerator) Volume() float32
The bus’s input volume.
Discussion ¶
The default value is `1.0`, and the range of valid values is `0.0` to `1.0`. Only the AVAudioEnvironmentNode and the AVAudioMixerNode implement this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioUnitGeneratorClass ¶
type AVAudioUnitGeneratorClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitGeneratorClass ¶
func GetAVAudioUnitGeneratorClass() AVAudioUnitGeneratorClass
GetAVAudioUnitGeneratorClass returns the class object for AVAudioUnitGenerator.
func (AVAudioUnitGeneratorClass) Alloc ¶
func (ac AVAudioUnitGeneratorClass) Alloc() AVAudioUnitGenerator
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitGeneratorClass) Class ¶
func (ac AVAudioUnitGeneratorClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitMIDIInstrument ¶
type AVAudioUnitMIDIInstrument struct {
AVAudioUnit
}
An object that represents music devices or remote instruments.
Overview ¶
Use an AVAudioUnitMIDIInstrument in a chain that processes real-time (live) input and works with music events, such as notes.
Creating a MIDI instrument ¶
- AVAudioUnitMIDIInstrument.InitWithAudioComponentDescription: Creates a MIDI instrument audio unit with the component description you specify.
Sending information to the MIDI instrument ¶
- AVAudioUnitMIDIInstrument.SendControllerWithValueOnChannel: Sends a MIDI controller event to the instrument.
- AVAudioUnitMIDIInstrument.SendMIDIEventData1: Sends a MIDI event which contains one data byte to the instrument.
- AVAudioUnitMIDIInstrument.SendMIDIEventData1Data2: Sends a MIDI event which contains two data bytes to the instrument.
- AVAudioUnitMIDIInstrument.SendMIDISysExEvent: Sends a MIDI System Exclusive event to the instrument.
- AVAudioUnitMIDIInstrument.SendPitchBendOnChannel: Sends a MIDI Pitch Bend event to the instrument.
- AVAudioUnitMIDIInstrument.SendPressureOnChannel: Sends a MIDI channel pressure event to the instrument.
- AVAudioUnitMIDIInstrument.SendPressureForKeyWithValueOnChannel: Sends a MIDI Polyphonic key pressure event to the instrument.
- AVAudioUnitMIDIInstrument.SendProgramChangeOnChannel: Sends MIDI Program Change and Bank Select events to the instrument.
- AVAudioUnitMIDIInstrument.SendProgramChangeBankMSBBankLSBOnChannel: Sends MIDI Program Change and Bank Select events to the instrument.
- AVAudioUnitMIDIInstrument.SendMIDIEventList: Sends a MIDI event list to the instrument.
Starting and stopping play ¶
- AVAudioUnitMIDIInstrument.StartNoteWithVelocityOnChannel: Sends a MIDI Note On event to the instrument.
- AVAudioUnitMIDIInstrument.StopNoteOnChannel: Sends a MIDI Note Off event to the instrument.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument
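The Send* methods above take raw MIDI bytes. For orientation, a standard MIDI channel-voice status byte combines a message-type high nibble with a zero-based channel number in the low nibble, which is what SendMIDIEventData1 and SendMIDIEventData1Data2 expect in `midiStatus`. A sketch of that layout (constants and helper are illustrative, not part of these bindings):

```go
package main

import "fmt"

// Standard MIDI channel-voice status nibbles; the low nibble carries the
// channel (0–15), matching the channel parameter of the Send* methods.
const (
	noteOff       = 0x80
	noteOn        = 0x90
	controlChange = 0xB0
	programChange = 0xC0
)

// statusByte combines a message-type nibble with a zero-based channel
// number into a single MIDI status byte.
func statusByte(messageType, channel uint8) uint8 {
	return messageType | (channel & 0x0F)
}

func main() {
	// Note On, middle C (60), velocity 64, on channel 0:
	fmt.Printf("midiStatus=0x%X data1=60 data2=64\n", statusByte(noteOn, 0))
}
```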
func AVAudioUnitMIDIInstrumentFromID ¶
func AVAudioUnitMIDIInstrumentFromID(id objc.ID) AVAudioUnitMIDIInstrument
AVAudioUnitMIDIInstrumentFromID constructs an AVAudioUnitMIDIInstrument from an objc.ID.
An object that represents music devices or remote instruments.
func NewAVAudioUnitMIDIInstrument ¶
func NewAVAudioUnitMIDIInstrument() AVAudioUnitMIDIInstrument
NewAVAudioUnitMIDIInstrument creates a new AVAudioUnitMIDIInstrument instance.
func NewAudioUnitMIDIInstrumentWithAudioComponentDescription ¶
func NewAudioUnitMIDIInstrumentWithAudioComponentDescription(description objectivec.IObject) AVAudioUnitMIDIInstrument
Creates a MIDI instrument audio unit with the component description you specify.
description: The description of the audio component.
Return Value ¶
A new AVAudioUnitMIDIInstrument instance.
Discussion ¶
The component type must be `kAudioUnitType_MusicDevice` or `kAudioUnitType_RemoteInstrument`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument/init(audioComponentDescription:)
description is a [audiotoolbox.AudioComponentDescription].
func (AVAudioUnitMIDIInstrument) Autorelease ¶
func (a AVAudioUnitMIDIInstrument) Autorelease() AVAudioUnitMIDIInstrument
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitMIDIInstrument) DestinationForMixerBus ¶
func (a AVAudioUnitMIDIInstrument) DestinationForMixerBus(mixer IAVAudioNode, bus AVAudioNodeBus) IAVAudioMixingDestination
Gets the audio mixing destination object that corresponds to the specified mixer node and input bus.
mixer: The mixer to get destination details for.
bus: The input bus.
Return Value ¶
Returns `self` if the specified mixer or input bus matches its connection point. If the mixer or input bus doesn’t match its connection point, or if the source node isn’t connected to the mixer or input bus, the method returns `nil`.
Discussion ¶
When you connect a source node to multiple mixers downstream, setting AVAudioMixing properties directly on the source node applies the change to all of them. Use this method to get the corresponding AVAudioMixingDestination for a specific mixer. Properties set on individual destination instances don’t reflect at the source node level.
If there’s any disconnection between the source and mixer nodes, the return value can be invalid. Fetch the return value every time you want to set or get properties on a specific mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/destination(forMixer:bus:)
func (AVAudioUnitMIDIInstrument) Init ¶
func (a AVAudioUnitMIDIInstrument) Init() AVAudioUnitMIDIInstrument
Init initializes the instance.
func (AVAudioUnitMIDIInstrument) InitWithAudioComponentDescription ¶
func (a AVAudioUnitMIDIInstrument) InitWithAudioComponentDescription(description objectivec.IObject) AVAudioUnitMIDIInstrument
Creates a MIDI instrument audio unit with the component description you specify.
description: The description of the audio component.
description is a [audiotoolbox.AudioComponentDescription].
Return Value ¶
A new AVAudioUnitMIDIInstrument instance.
Discussion ¶
The component type must be `kAudioUnitType_MusicDevice` or `kAudioUnitType_RemoteInstrument`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument/init(audioComponentDescription:)
func (AVAudioUnitMIDIInstrument) Obstruction ¶
func (a AVAudioUnitMIDIInstrument) Obstruction() float32
A value that simulates filtering of the direct path of sound due to an obstacle.
Discussion ¶
The value of `obstruction` is in decibels. The system blocks only the direct path of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/obstruction
func (AVAudioUnitMIDIInstrument) Occlusion ¶
func (a AVAudioUnitMIDIInstrument) Occlusion() float32
A value that simulates filtering of the direct and reverb paths of sound due to an obstacle.
Discussion ¶
The value of `occlusion` is in decibels. The system blocks the direct and reverb paths of sound between the source and listener.
The default value is `0.0`, and the range of valid values is `-100` to `0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/occlusion
func (AVAudioUnitMIDIInstrument) Pan ¶
func (a AVAudioUnitMIDIInstrument) Pan() float32
The bus’s stereo pan.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-1.0` to `1.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioStereoMixing/pan
func (AVAudioUnitMIDIInstrument) PointSourceInHeadMode ¶
func (a AVAudioUnitMIDIInstrument) PointSourceInHeadMode() AVAudio3DMixingPointSourceInHeadMode
The in-head mode for a point source.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/pointSourceInHeadMode
func (AVAudioUnitMIDIInstrument) Position ¶
func (a AVAudioUnitMIDIInstrument) Position() AVAudio3DPoint
The location of the source in the 3D environment.
Discussion ¶
The system specifies the coordinates in meters. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/position
func (AVAudioUnitMIDIInstrument) Rate ¶
func (a AVAudioUnitMIDIInstrument) Rate() float32
A value that changes the playback rate of the input signal.
Discussion ¶
A value of `2.0` results in the output audio playing one octave higher. A value of `0.5` results in the output audio playing one octave lower.
The default value is `1.0`, and the range of valid values is `0.5` to `2.0`. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/rate
func (AVAudioUnitMIDIInstrument) RenderingAlgorithm ¶
func (a AVAudioUnitMIDIInstrument) RenderingAlgorithm() AVAudio3DMixingRenderingAlgorithm
The type of rendering algorithm the mixer uses.
Discussion ¶
Depending on the current output format of the AVAudioEnvironmentNode instance, the system may only support a subset of the rendering algorithms. You can retrieve an array of valid rendering algorithms by calling the [ApplicableRenderingAlgorithms] function of the AVAudioEnvironmentNode instance.
The default rendering algorithm is [AVAudio3DMixingRenderingAlgorithmEqualPowerPanning]. Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/renderingAlgorithm
func (AVAudioUnitMIDIInstrument) ReverbBlend ¶
func (a AVAudioUnitMIDIInstrument) ReverbBlend() float32
A value that controls the blend of dry and reverb processed audio.
Discussion ¶
This property controls the amount of the source’s audio that the AVAudioEnvironmentNode instance processes. A value of `0.5` results in an equal blend of dry and processed (wet) audio.
The default is `0.0`, and the range of valid values is `0.0` (completely dry) to `1.0` (completely wet). Only the AVAudioEnvironmentNode class implements this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/reverbBlend
func (AVAudioUnitMIDIInstrument) SendControllerWithValueOnChannel ¶
func (a AVAudioUnitMIDIInstrument) SendControllerWithValueOnChannel(controller uint8, value uint8, channel uint8)
Sends a MIDI controller event to the instrument.
controller: Specifies a standard MIDI controller number. The valid range is `0` to `127`.
value: Value for the controller. The valid range is `0` to `127`.
channel: The channel number to send the event to. The valid range is `0` to `15`.
func (AVAudioUnitMIDIInstrument) SendMIDIEventData1 ¶
func (a AVAudioUnitMIDIInstrument) SendMIDIEventData1(midiStatus uint8, data1 uint8)
Sends a MIDI event which contains one data byte to the instrument.
midiStatus: The status value of the MIDI event.
data1: The data byte of the MIDI event.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument/sendMIDIEvent(_:data1:)
func (AVAudioUnitMIDIInstrument) SendMIDIEventData1Data2 ¶
func (a AVAudioUnitMIDIInstrument) SendMIDIEventData1Data2(midiStatus uint8, data1 uint8, data2 uint8)
Sends a MIDI event which contains two data bytes to the instrument.
midiStatus: The status value of the MIDI event.
data1: The first data byte of the MIDI event.
data2: The second data byte of the MIDI event.
func (AVAudioUnitMIDIInstrument) SendMIDIEventList ¶
func (a AVAudioUnitMIDIInstrument) SendMIDIEventList(eventList objectivec.IObject)
Sends a MIDI event list to the instrument.
eventList: The MIDI event list.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument/send(_:)
func (AVAudioUnitMIDIInstrument) SendMIDISysExEvent ¶
func (a AVAudioUnitMIDIInstrument) SendMIDISysExEvent(midiData foundation.INSData)
Sends a MIDI System Exclusive event to the instrument.
midiData: The system exclusive data you want to send to the instrument.
Discussion ¶
The `midiData` parameter should contain the complete [SysEx] data, including start (F0) and termination (F7) bytes.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument/sendMIDISysExEvent(_:)
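Since the method expects the complete SysEx message including the framing bytes, a caller can sanity-check the payload before sending it. A minimal sketch (the helper is illustrative, not part of these bindings):

```go
package main

import (
	"errors"
	"fmt"
)

// validateSysEx checks the framing SendMIDISysExEvent requires: the
// payload must begin with the start byte 0xF0 and end with the
// termination byte 0xF7.
func validateSysEx(data []byte) error {
	if len(data) < 2 || data[0] != 0xF0 || data[len(data)-1] != 0xF7 {
		return errors.New("SysEx data must be framed by 0xF0 … 0xF7")
	}
	return nil
}

func main() {
	msg := []byte{0xF0, 0x7E, 0x7F, 0x09, 0x01, 0xF7} // GM System On
	fmt.Println(validateSysEx(msg))                    // <nil>
}
```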
func (AVAudioUnitMIDIInstrument) SendPitchBendOnChannel ¶
func (a AVAudioUnitMIDIInstrument) SendPitchBendOnChannel(pitchbend uint16, channel uint8)
Sends a MIDI Pitch Bend event to the instrument.
pitchbend: Value of the pitchbend. The valid range of values is `0` to `16383`.
channel: The channel number to send the event to. The valid range of values is `0` to `15`.
Discussion ¶
If you don’t invoke this method, the system uses the default pitch bend value of `8192` (no bend).
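The pitch bend value is a 14-bit quantity centered at `8192`. A sketch of mapping a normalized bend offset onto that range (the helper is illustrative, not part of these bindings):

```go
package main

import "fmt"

// pitchBendCenter is the 14-bit midpoint meaning no bend; valid values
// for SendPitchBendOnChannel run from 0 to 16383.
const pitchBendCenter = 8192

// bendForOffset maps a normalized offset in [-1, 1] (full bend down to
// full bend up) to the 14-bit value the method takes, clamping the ends.
func bendForOffset(offset float64) uint16 {
	v := pitchBendCenter + int(offset*8192)
	if v < 0 {
		v = 0
	}
	if v > 16383 {
		v = 16383
	}
	return uint16(v)
}

func main() {
	fmt.Println(bendForOffset(0))  // 8192: centered, no bend
	fmt.Println(bendForOffset(1))  // 16383: full bend up
	fmt.Println(bendForOffset(-1)) // 0: full bend down
}
```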
func (AVAudioUnitMIDIInstrument) SendPressureForKeyWithValueOnChannel ¶
func (a AVAudioUnitMIDIInstrument) SendPressureForKeyWithValueOnChannel(key uint8, value uint8, channel uint8)
Sends a MIDI Polyphonic key pressure event to the instrument.
key: The key (note) number to which the pressure event applies. The valid range is `0` to `127`.
value: The value of the pressure. The valid range is `0` to `127`.
channel: The channel number to send the event to. The valid range is `0` to `15`.
func (AVAudioUnitMIDIInstrument) SendPressureOnChannel ¶
func (a AVAudioUnitMIDIInstrument) SendPressureOnChannel(pressure uint8, channel uint8)
Sends a MIDI channel pressure event to the instrument.
pressure: The value of the pressure. The valid range is `0` to `127`.
channel: The channel number to send the event to. The valid range is `0` to `15`.
func (AVAudioUnitMIDIInstrument) SendProgramChangeBankMSBBankLSBOnChannel ¶
func (a AVAudioUnitMIDIInstrument) SendProgramChangeBankMSBBankLSBOnChannel(program uint8, bankMSB uint8, bankLSB uint8, channel uint8)
Sends MIDI Program Change and Bank Select events to the instrument.
program: Specifies the program (preset) number within the bank to load. The valid range is `0` to `127`.
bankMSB: Specifies the most significant byte value for the bank to select. The valid range is `0` to `127`.
bankLSB: Specifies the least significant byte value for the bank to select. The valid range is `0` to `127`.
channel: The channel number to send the event to. The valid range is `0` to `15`.
func (AVAudioUnitMIDIInstrument) SendProgramChangeOnChannel ¶
func (a AVAudioUnitMIDIInstrument) SendProgramChangeOnChannel(program uint8, channel uint8)
Sends MIDI Program Change and Bank Select events to the instrument.
program: The program (preset) number within the bank to load. The valid range is `0` to `127`.
channel: The channel number to send the event to. The valid range is `0` to `15`.
Discussion ¶
The system loads the instrument from the bank that was previously set by the MIDI Bank Select controller messages (0 and 32). The system uses bank `0` if not previously set.
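The MSB/LSB pair taken by SendProgramChangeBankMSBBankLSBOnChannel is the standard 7-bit split of a 14-bit MIDI bank number. A sketch of that split (the helper is illustrative, not part of these bindings):

```go
package main

import "fmt"

// bankBytes splits a 14-bit MIDI bank number (0–16383) into the
// most- and least-significant 7-bit bytes used by Bank Select.
func bankBytes(bank uint16) (msb, lsb uint8) {
	return uint8(bank >> 7 & 0x7F), uint8(bank & 0x7F)
}

func main() {
	msb, lsb := bankBytes(16383) // highest addressable bank
	fmt.Println(msb, lsb)        // 127 127
}
```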
func (AVAudioUnitMIDIInstrument) SetObstruction ¶
func (a AVAudioUnitMIDIInstrument) SetObstruction(value float32)
func (AVAudioUnitMIDIInstrument) SetOcclusion ¶
func (a AVAudioUnitMIDIInstrument) SetOcclusion(value float32)
func (AVAudioUnitMIDIInstrument) SetPan ¶
func (a AVAudioUnitMIDIInstrument) SetPan(value float32)
func (AVAudioUnitMIDIInstrument) SetPointSourceInHeadMode ¶
func (a AVAudioUnitMIDIInstrument) SetPointSourceInHeadMode(value AVAudio3DMixingPointSourceInHeadMode)
func (AVAudioUnitMIDIInstrument) SetPosition ¶
func (a AVAudioUnitMIDIInstrument) SetPosition(value AVAudio3DPoint)
func (AVAudioUnitMIDIInstrument) SetRate ¶
func (a AVAudioUnitMIDIInstrument) SetRate(value float32)
func (AVAudioUnitMIDIInstrument) SetRenderingAlgorithm ¶
func (a AVAudioUnitMIDIInstrument) SetRenderingAlgorithm(value AVAudio3DMixingRenderingAlgorithm)
func (AVAudioUnitMIDIInstrument) SetReverbBlend ¶
func (a AVAudioUnitMIDIInstrument) SetReverbBlend(value float32)
func (AVAudioUnitMIDIInstrument) SetSourceMode ¶
func (a AVAudioUnitMIDIInstrument) SetSourceMode(value AVAudio3DMixingSourceMode)
func (AVAudioUnitMIDIInstrument) SetVolume ¶
func (a AVAudioUnitMIDIInstrument) SetVolume(value float32)
func (AVAudioUnitMIDIInstrument) SourceMode ¶
func (a AVAudioUnitMIDIInstrument) SourceMode() AVAudio3DMixingSourceMode
The source mode for the input bus of the audio environment node.
See: https://developer.apple.com/documentation/AVFAudio/AVAudio3DMixing/sourceMode
func (AVAudioUnitMIDIInstrument) StartNoteWithVelocityOnChannel ¶
func (a AVAudioUnitMIDIInstrument) StartNoteWithVelocityOnChannel(note uint8, velocity uint8, channel uint8)
Sends a MIDI Note On event to the instrument.
note: The note number (key) to play. The valid range is `0` to `127`.
velocity: Specifies the volume to play the note at. The valid range is `0` to `127`.
channel: The channel number to send the event to. The valid range is `0` to `15`.
func (AVAudioUnitMIDIInstrument) StopNoteOnChannel ¶
func (a AVAudioUnitMIDIInstrument) StopNoteOnChannel(note uint8, channel uint8)
Sends a MIDI Note Off event to the instrument.
note: The note number (key) to stop. The valid range is `0` to `127`.
channel: The channel number to send the event to. The valid range is `0` to `15`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument/stopNote(_:onChannel:)
func (AVAudioUnitMIDIInstrument) Volume ¶
func (a AVAudioUnitMIDIInstrument) Volume() float32
The bus’s input volume.
Discussion ¶
The default value is `1.0`, and the range of valid values is `0.0` to `1.0`. Only the AVAudioEnvironmentNode and the AVAudioMixerNode implement this property.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixing/volume
type AVAudioUnitMIDIInstrumentClass ¶
type AVAudioUnitMIDIInstrumentClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitMIDIInstrumentClass ¶
func GetAVAudioUnitMIDIInstrumentClass() AVAudioUnitMIDIInstrumentClass
GetAVAudioUnitMIDIInstrumentClass returns the class object for AVAudioUnitMIDIInstrument.
func (AVAudioUnitMIDIInstrumentClass) Alloc ¶
func (ac AVAudioUnitMIDIInstrumentClass) Alloc() AVAudioUnitMIDIInstrument
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitMIDIInstrumentClass) Class ¶
func (ac AVAudioUnitMIDIInstrumentClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitReverb ¶
type AVAudioUnitReverb struct {
AVAudioUnitEffect
}
An object that implements a reverb effect.
Overview ¶
A reverb simulates the acoustic characteristics of a particular environment. Use the different presets to simulate a particular space and blend it in with the original signal using the AVAudioUnitReverb.WetDryMix property.
Configure the reverb ¶
- AVAudioUnitReverb.LoadFactoryPreset: Configures the audio unit as a reverb preset.
Getting and setting the reverb values ¶
- AVAudioUnitReverb.WetDryMix: The blend of the wet and dry signals.
- AVAudioUnitReverb.SetWetDryMix
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitReverb
func AVAudioUnitReverbFromID ¶
func AVAudioUnitReverbFromID(id objc.ID) AVAudioUnitReverb
AVAudioUnitReverbFromID constructs an AVAudioUnitReverb from an objc.ID.
An object that implements a reverb effect.
func NewAVAudioUnitReverb ¶
func NewAVAudioUnitReverb() AVAudioUnitReverb
NewAVAudioUnitReverb creates a new AVAudioUnitReverb instance.
func NewAudioUnitReverbWithAudioComponentDescription ¶
func NewAudioUnitReverbWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitReverb
Creates an audio unit effect object with the specified description.
audioComponentDescription: The description of the audio unit to create.
The `audioComponentDescription` must be one of these types: `kAudioUnitType_Effect`, `kAudioUnitType_MusicEffect`, `kAudioUnitType_Panner`, `kAudioUnitType_RemoteEffect`, or `kAudioUnitType_RemoteMusicEffect`.
Return Value ¶
A new AVAudioUnitReverb instance.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect/init(audioComponentDescription:)
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
func (AVAudioUnitReverb) Autorelease ¶
func (a AVAudioUnitReverb) Autorelease() AVAudioUnitReverb
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitReverb) Init ¶
func (a AVAudioUnitReverb) Init() AVAudioUnitReverb
Init initializes the instance.
func (AVAudioUnitReverb) LoadFactoryPreset ¶
func (a AVAudioUnitReverb) LoadFactoryPreset(preset AVAudioUnitReverbPreset)
Configures the audio unit as a reverb preset.
preset: The reverb preset.
Discussion ¶
For more information about possible values, see AVAudioUnitReverbPreset. The default value is [AVAudioUnitReverbPresetMediumHall].
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitReverb/loadFactoryPreset(_:)
func (AVAudioUnitReverb) SetWetDryMix ¶
func (a AVAudioUnitReverb) SetWetDryMix(value float32)
func (AVAudioUnitReverb) WetDryMix ¶
func (a AVAudioUnitReverb) WetDryMix() float32
The blend of the wet and dry signals.
Discussion ¶
You specify the blend as a percentage. The range is `0%` through `100%`, where `0%` represents all dry and `100%` represents all wet.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitReverb/wetDryMix
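The property itself only sets the blend percentage. As an illustration of the documented range, a conventional linear crossfade between the dry and wet signals can be sketched in plain Go (the audio unit's actual blend curve is not specified here, so treat this as an assumption):

```go
package main

import "fmt"

// blend mixes a dry (unprocessed) and wet (reverberated) sample using a
// wet/dry mix expressed as a percentage: 0% is all dry, 100% is all wet.
func blend(dry, wet, mixPercent float32) float32 {
	if mixPercent < 0 {
		mixPercent = 0
	}
	if mixPercent > 100 {
		mixPercent = 100
	}
	w := mixPercent / 100
	return dry*(1-w) + wet*w
}

func main() {
	fmt.Println(blend(1.0, 0.25, 0))   // all dry: 1.0
	fmt.Println(blend(1.0, 0.25, 100)) // all wet: 0.25
	fmt.Println(blend(1.0, 0.25, 50))  // equal blend: 0.625
}
```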
type AVAudioUnitReverbClass ¶
type AVAudioUnitReverbClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitReverbClass ¶
func GetAVAudioUnitReverbClass() AVAudioUnitReverbClass
GetAVAudioUnitReverbClass returns the class object for AVAudioUnitReverb.
func (AVAudioUnitReverbClass) Alloc ¶
func (ac AVAudioUnitReverbClass) Alloc() AVAudioUnitReverb
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitReverbClass) Class ¶
func (ac AVAudioUnitReverbClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitReverbPreset ¶
type AVAudioUnitReverbPreset int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitReverbPreset
const (
	// AVAudioUnitReverbPresetCathedral: A preset that represents a reverb with the acoustic characteristics of a cathedral environment.
	AVAudioUnitReverbPresetCathedral AVAudioUnitReverbPreset = 8
	// AVAudioUnitReverbPresetLargeChamber: A preset that represents a reverb with the acoustic characteristics of a large-sized chamber environment.
	AVAudioUnitReverbPresetLargeChamber AVAudioUnitReverbPreset = 7
	// AVAudioUnitReverbPresetLargeHall: A preset that represents a reverb with the acoustic characteristics of a large-sized hall environment.
	AVAudioUnitReverbPresetLargeHall AVAudioUnitReverbPreset = 4
	// AVAudioUnitReverbPresetLargeHall2: A preset that represents a reverb with the acoustic characteristics of an alternative large-sized hall environment.
	AVAudioUnitReverbPresetLargeHall2 AVAudioUnitReverbPreset = 12
	// AVAudioUnitReverbPresetLargeRoom: A preset that represents a reverb with the acoustic characteristics of a large-sized room environment.
	AVAudioUnitReverbPresetLargeRoom AVAudioUnitReverbPreset = 2
	// AVAudioUnitReverbPresetLargeRoom2: A preset that represents a reverb with the acoustic characteristics of an alternative large-sized room environment.
	AVAudioUnitReverbPresetLargeRoom2 AVAudioUnitReverbPreset = 9
	// AVAudioUnitReverbPresetMediumChamber: A preset that represents a reverb with the acoustic characteristics of a medium-sized chamber environment.
	AVAudioUnitReverbPresetMediumChamber AVAudioUnitReverbPreset = 6
	// AVAudioUnitReverbPresetMediumHall: A preset that represents a reverb with the acoustic characteristics of a medium-sized hall environment.
	AVAudioUnitReverbPresetMediumHall AVAudioUnitReverbPreset = 3
	// AVAudioUnitReverbPresetMediumHall2: A preset that represents a reverb with the acoustic characteristics of an alternative medium-sized hall environment.
	AVAudioUnitReverbPresetMediumHall2 AVAudioUnitReverbPreset = 10
	// AVAudioUnitReverbPresetMediumHall3: A preset that represents a reverb with the acoustic characteristics of an alternative medium-sized hall environment.
	AVAudioUnitReverbPresetMediumHall3 AVAudioUnitReverbPreset = 11
	// AVAudioUnitReverbPresetMediumRoom: A preset that represents a reverb with the acoustic characteristics of a medium-sized room environment.
	AVAudioUnitReverbPresetMediumRoom AVAudioUnitReverbPreset = 1
	// AVAudioUnitReverbPresetPlate: A preset that represents a reverb with the acoustic characteristics of a plate environment.
	AVAudioUnitReverbPresetPlate AVAudioUnitReverbPreset = 5
	// AVAudioUnitReverbPresetSmallRoom: A preset that represents a reverb with the acoustic characteristics of a small-sized room environment.
	AVAudioUnitReverbPresetSmallRoom AVAudioUnitReverbPreset = 0
)
func (AVAudioUnitReverbPreset) String ¶
func (e AVAudioUnitReverbPreset) String() string
type AVAudioUnitSampler ¶
type AVAudioUnitSampler struct {
AVAudioUnitMIDIInstrument
}
An object that you configure with one or more instrument samples, based on Apple’s Sampler audio unit.
Overview ¶
An AVAudioUnitSampler is an AVAudioUnit for Apple’s Sampler audio unit.
You configure the sampler by loading instruments from different types of files: an `aupreset` file, a DLS or SF2 sound bank, an EXS24 instrument, a single audio file, or an array of audio files.
The output of an AVAudioUnitSampler is a single stereo bus.
Configuring the Sampler Audio Unit ¶
- AVAudioUnitSampler.LoadInstrumentAtURLError: Configures the sampler with the specified instrument file.
- AVAudioUnitSampler.LoadAudioFilesAtURLsError: Configures the sampler by loading the specified audio files.
- AVAudioUnitSampler.LoadSoundBankInstrumentAtURLProgramBankMSBBankLSBError: Loads a specific instrument from the specified soundbank.
Getting and Setting Sampler Values ¶
- AVAudioUnitSampler.GlobalTuning: An adjustment for the tuning of all the played notes.
- AVAudioUnitSampler.SetGlobalTuning
- AVAudioUnitSampler.OverallGain: An adjustment for the gain of all the played notes, in decibels.
- AVAudioUnitSampler.SetOverallGain
- AVAudioUnitSampler.StereoPan: An adjustment for the stereo panning of all the played notes.
- AVAudioUnitSampler.SetStereoPan
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitSampler
func AVAudioUnitSamplerFromID ¶
func AVAudioUnitSamplerFromID(id objc.ID) AVAudioUnitSampler
AVAudioUnitSamplerFromID constructs an AVAudioUnitSampler from an objc.ID.
An object that you configure with one or more instrument samples, based on Apple’s Sampler audio unit.
func NewAVAudioUnitSampler ¶
func NewAVAudioUnitSampler() AVAudioUnitSampler
NewAVAudioUnitSampler creates a new AVAudioUnitSampler instance.
func NewAudioUnitSamplerWithAudioComponentDescription ¶
func NewAudioUnitSamplerWithAudioComponentDescription(description objectivec.IObject) AVAudioUnitSampler
Creates a MIDI instrument audio unit with the component description you specify.
description: The description of the audio component.
Return Value ¶
A new AVAudioUnitSampler instance.
Discussion ¶
The component type must be `kAudioUnitType_MusicDevice` or `kAudioUnitType_RemoteInstrument`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument/init(audioComponentDescription:)
description is a [audiotoolbox.AudioComponentDescription].
func (AVAudioUnitSampler) Autorelease ¶
func (a AVAudioUnitSampler) Autorelease() AVAudioUnitSampler
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitSampler) GlobalTuning ¶
func (a AVAudioUnitSampler) GlobalTuning() float32
An adjustment for the tuning of all the played notes.
Discussion ¶
The tuning unit is cents, and defaults to `0.0`. The range of valid values is `-2400` to `2400` cents.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitSampler/globalTuning
func (AVAudioUnitSampler) Init ¶
func (a AVAudioUnitSampler) Init() AVAudioUnitSampler
Init initializes the instance.
func (AVAudioUnitSampler) LoadAudioFilesAtURLsError ¶
func (a AVAudioUnitSampler) LoadAudioFilesAtURLsError(audioFiles []foundation.NSURL) (bool, error)
Configures the sampler by loading the specified audio files.
audioFiles: An array of audio file URLs to load.
Discussion ¶
The framework loads the audio files into a new instrument with each audio file in its own sampler zone. The framework uses any information in the audio file for its placement in the instrument. For example, the root key and key range.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitSampler/loadAudioFiles(at:)
func (AVAudioUnitSampler) LoadInstrumentAtURLError ¶
func (a AVAudioUnitSampler) LoadInstrumentAtURLError(instrumentURL foundation.INSURL) (bool, error)
Configures the sampler with the specified instrument file.
instrumentURL: The URL of the file that contains the instrument.
Discussion ¶
The instrument can be one of the following types: Logic or GarageBand [EXS24], the sampler’s native `aupreset` file, or an audio file, such as `caf`, `aiff`, `wav`, or `mp3`.
For a single audio file, the framework loads it into a new default instrument and uses any information in the audio file, such as the root key and key range, for its placement in the instrument.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitSampler/loadInstrument(at:)
func (AVAudioUnitSampler) LoadSoundBankInstrumentAtURLProgramBankMSBBankLSBError ¶
func (a AVAudioUnitSampler) LoadSoundBankInstrumentAtURLProgramBankMSBBankLSBError(bankURL foundation.INSURL, program uint8, bankMSB uint8, bankLSB uint8) (bool, error)
Loads a specific instrument from the specified soundbank.
bankURL: The URL for a soundbank file, either a DLS bank (`.dls`) or a SoundFont bank (`.sf2`).
program: The program number for the instrument to load.
bankMSB: The most significant bit for the bank number for the instrument to load. This is usually `0x79` for melodic instruments and `0x78` for percussion instruments.
bankLSB: The least significant bit for the bank number for the instrument to load. This is often `0` and represents the bank variation.
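As an illustration of the `bankMSB` convention described above, here is a small hypothetical helper (`bankMSB` is not part of this package) that picks the conventional bank-select MSB for a soundbank instrument:

```go
package main

import "fmt"

// bankMSB returns the conventional MIDI bank-select MSB for soundbank
// instruments: 0x79 (121) for melodic instruments and 0x78 (120) for
// percussion, matching the values noted in the parameter description.
func bankMSB(percussion bool) uint8 {
	if percussion {
		return 0x78
	}
	return 0x79
}

func main() {
	fmt.Printf("melodic MSB: 0x%X\n", bankMSB(false))
	fmt.Printf("percussion MSB: 0x%X\n", bankMSB(true))
}
```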
func (AVAudioUnitSampler) OverallGain ¶
func (a AVAudioUnitSampler) OverallGain() float32
An adjustment for the gain of all the played notes, in decibels.
Discussion ¶
The default value is `0.0` dB, and the range of valid values is `-90.0` dB to `12.0` dB.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitSampler/overallGain
func (AVAudioUnitSampler) SetGlobalTuning ¶
func (a AVAudioUnitSampler) SetGlobalTuning(value float32)
func (AVAudioUnitSampler) SetOverallGain ¶
func (a AVAudioUnitSampler) SetOverallGain(value float32)
func (AVAudioUnitSampler) SetStereoPan ¶
func (a AVAudioUnitSampler) SetStereoPan(value float32)
func (AVAudioUnitSampler) StereoPan ¶
func (a AVAudioUnitSampler) StereoPan() float32
An adjustment for the stereo panning of all the played notes.
Discussion ¶
The default value is `0.0`, and the range of valid values is `-100.0` to `100.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitSampler/stereoPan
type AVAudioUnitSamplerClass ¶
type AVAudioUnitSamplerClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitSamplerClass ¶
func GetAVAudioUnitSamplerClass() AVAudioUnitSamplerClass
GetAVAudioUnitSamplerClass returns the class object for AVAudioUnitSampler.
func (AVAudioUnitSamplerClass) Alloc ¶
func (ac AVAudioUnitSamplerClass) Alloc() AVAudioUnitSampler
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitSamplerClass) Class ¶
func (ac AVAudioUnitSamplerClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitTimeEffect ¶
type AVAudioUnitTimeEffect struct {
AVAudioUnit
}
An object that processes audio in nonreal time.
Overview ¶
A time effect audio unit represents an AVAudioUnit with the type `kAudioUnitType_FormatConverter` (`aufc`). These effects don't process audio in real time. The AVAudioUnitVarispeed class is an example of a time effect unit.
Creating a time effect ¶
- AVAudioUnitTimeEffect.InitWithAudioComponentDescription: Creates a time effect audio unit with the specified description.
Getting and setting the time effect ¶
- AVAudioUnitTimeEffect.Bypass: The bypass state of the audio unit.
- AVAudioUnitTimeEffect.SetBypass
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimeEffect
func AVAudioUnitTimeEffectFromID ¶
func AVAudioUnitTimeEffectFromID(id objc.ID) AVAudioUnitTimeEffect
AVAudioUnitTimeEffectFromID constructs an AVAudioUnitTimeEffect from an objc.ID.
An object that processes audio in nonreal time.
func NewAVAudioUnitTimeEffect ¶
func NewAVAudioUnitTimeEffect() AVAudioUnitTimeEffect
NewAVAudioUnitTimeEffect creates a new AVAudioUnitTimeEffect instance.
func NewAudioUnitTimeEffectWithAudioComponentDescription ¶
func NewAudioUnitTimeEffectWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitTimeEffect
Creates a time effect audio unit with the specified description.
audioComponentDescription: The description of the audio unit to create.
Return Value ¶
A new AVAudioUnitTimeEffect instance.
Discussion ¶
The `componentType` field of the description structure must be `kAudioUnitType_FormatConverter` (`aufc`); otherwise, the method raises an exception.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimeEffect/init(audioComponentDescription:)
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
func (AVAudioUnitTimeEffect) Autorelease ¶
func (a AVAudioUnitTimeEffect) Autorelease() AVAudioUnitTimeEffect
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitTimeEffect) Bypass ¶
func (a AVAudioUnitTimeEffect) Bypass() bool
The bypass state of the audio unit.
Discussion ¶
If true, the audio unit bypasses audio processing.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimeEffect/bypass
func (AVAudioUnitTimeEffect) Init ¶
func (a AVAudioUnitTimeEffect) Init() AVAudioUnitTimeEffect
Init initializes the instance.
func (AVAudioUnitTimeEffect) InitWithAudioComponentDescription ¶
func (a AVAudioUnitTimeEffect) InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitTimeEffect
Creates a time effect audio unit with the specified description.
audioComponentDescription: The description of the audio unit to create.
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
Return Value ¶
A new AVAudioUnitTimeEffect instance.
Discussion ¶
The `componentType` field of the description structure must be `kAudioUnitType_FormatConverter` (`aufc`); otherwise, the method raises an exception.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimeEffect/init(audioComponentDescription:)
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
func (AVAudioUnitTimeEffect) SetBypass ¶
func (a AVAudioUnitTimeEffect) SetBypass(value bool)
type AVAudioUnitTimeEffectClass ¶
type AVAudioUnitTimeEffectClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitTimeEffectClass ¶
func GetAVAudioUnitTimeEffectClass() AVAudioUnitTimeEffectClass
GetAVAudioUnitTimeEffectClass returns the class object for AVAudioUnitTimeEffect.
func (AVAudioUnitTimeEffectClass) Alloc ¶
func (ac AVAudioUnitTimeEffectClass) Alloc() AVAudioUnitTimeEffect
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitTimeEffectClass) Class ¶
func (ac AVAudioUnitTimeEffectClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitTimePitch ¶
type AVAudioUnitTimePitch struct {
AVAudioUnitTimeEffect
}
An object that provides a good-quality playback rate and pitch shifting independently of each other.
Getting and setting time pitch values ¶
- AVAudioUnitTimePitch.Overlap: The amount of overlap between segments of the input audio signal.
- AVAudioUnitTimePitch.SetOverlap
- AVAudioUnitTimePitch.Pitch: The amount to use to pitch shift the input signal.
- AVAudioUnitTimePitch.SetPitch
- AVAudioUnitTimePitch.Rate: The playback rate of the input signal.
- AVAudioUnitTimePitch.SetRate
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimePitch
func AVAudioUnitTimePitchFromID ¶
func AVAudioUnitTimePitchFromID(id objc.ID) AVAudioUnitTimePitch
AVAudioUnitTimePitchFromID constructs an AVAudioUnitTimePitch from an objc.ID.
An object that provides a good-quality playback rate and pitch shifting independently of each other.
func NewAVAudioUnitTimePitch ¶
func NewAVAudioUnitTimePitch() AVAudioUnitTimePitch
NewAVAudioUnitTimePitch creates a new AVAudioUnitTimePitch instance.
func NewAudioUnitTimePitchWithAudioComponentDescription ¶
func NewAudioUnitTimePitchWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitTimePitch
Creates a time effect audio unit with the specified description.
audioComponentDescription: The description of the audio unit to create.
Return Value ¶
A new AVAudioUnitTimePitch instance.
Discussion ¶
The `componentType` field of the description structure must be `kAudioUnitType_FormatConverter` (`aufc`); otherwise, the method raises an exception.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimeEffect/init(audioComponentDescription:)
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
func (AVAudioUnitTimePitch) Autorelease ¶
func (a AVAudioUnitTimePitch) Autorelease() AVAudioUnitTimePitch
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitTimePitch) Init ¶
func (a AVAudioUnitTimePitch) Init() AVAudioUnitTimePitch
Init initializes the instance.
func (AVAudioUnitTimePitch) Overlap ¶
func (a AVAudioUnitTimePitch) Overlap() float32
The amount of overlap between segments of the input audio signal.
Discussion ¶
A higher value results in fewer artifacts in the output signal. The default value is `8.0`. The range of values is `3.0` to `32.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimePitch/overlap
func (AVAudioUnitTimePitch) Pitch ¶
func (a AVAudioUnitTimePitch) Pitch() float32
The amount to use to pitch shift the input signal.
Discussion ¶
The audio unit measures the pitch in cents, a logarithmic unit used for measuring musical intervals. One octave is equal to 1200 cents. One musical semitone is equal to 100 cents.
The default value is `0.0`. The range of values is `-2400` to `2400`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimePitch/pitch
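As a sketch of the unit conversions above (plain Go, independent of the binding): to raise playback by a perfect fifth, seven semitones, you would pass 700 cents to SetPitch.

```go
package main

import "fmt"

// centsForSemitones converts a musical interval in semitones to cents:
// one semitone is 100 cents, one octave (12 semitones) is 1200 cents.
func centsForSemitones(semitones float32) float32 {
	return semitones * 100
}

// clampPitch limits a value to the documented pitch range of the audio
// unit: -2400 to 2400 cents (two octaves down to two octaves up).
func clampPitch(cents float32) float32 {
	if cents < -2400 {
		return -2400
	}
	if cents > 2400 {
		return 2400
	}
	return cents
}

func main() {
	fmt.Println(centsForSemitones(7))              // a perfect fifth: 700 cents
	fmt.Println(clampPitch(centsForSemitones(36))) // 3 octaves, clamped to 2400
}
```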
func (AVAudioUnitTimePitch) Rate ¶
func (a AVAudioUnitTimePitch) Rate() float32
The playback rate of the input signal.
Discussion ¶
The default value is `1.0`. The range of supported values is `1/32` to `32.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimePitch/rate
func (AVAudioUnitTimePitch) SetOverlap ¶
func (a AVAudioUnitTimePitch) SetOverlap(value float32)
func (AVAudioUnitTimePitch) SetPitch ¶
func (a AVAudioUnitTimePitch) SetPitch(value float32)
func (AVAudioUnitTimePitch) SetRate ¶
func (a AVAudioUnitTimePitch) SetRate(value float32)
type AVAudioUnitTimePitchClass ¶
type AVAudioUnitTimePitchClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitTimePitchClass ¶
func GetAVAudioUnitTimePitchClass() AVAudioUnitTimePitchClass
GetAVAudioUnitTimePitchClass returns the class object for AVAudioUnitTimePitch.
func (AVAudioUnitTimePitchClass) Alloc ¶
func (ac AVAudioUnitTimePitchClass) Alloc() AVAudioUnitTimePitch
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitTimePitchClass) Class ¶
func (ac AVAudioUnitTimePitchClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioUnitVarispeed ¶
type AVAudioUnitVarispeed struct {
AVAudioUnitTimeEffect
}
An object that allows control of the playback rate.
Getting and setting the playback rate ¶
- AVAudioUnitVarispeed.Rate: The audio playback rate.
- AVAudioUnitVarispeed.SetRate
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitVarispeed
func AVAudioUnitVarispeedFromID ¶
func AVAudioUnitVarispeedFromID(id objc.ID) AVAudioUnitVarispeed
AVAudioUnitVarispeedFromID constructs an AVAudioUnitVarispeed from an objc.ID.
An object that allows control of the playback rate.
func NewAVAudioUnitVarispeed ¶
func NewAVAudioUnitVarispeed() AVAudioUnitVarispeed
NewAVAudioUnitVarispeed creates a new AVAudioUnitVarispeed instance.
func NewAudioUnitVarispeedWithAudioComponentDescription ¶
func NewAudioUnitVarispeedWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitVarispeed
Creates a time effect audio unit with the specified description.
audioComponentDescription: The description of the audio unit to create.
Return Value ¶
A new AVAudioUnitVarispeed instance.
Discussion ¶
The `componentType` field of the description structure must be `kAudioUnitType_FormatConverter` (`aufc`); otherwise, the method raises an exception.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimeEffect/init(audioComponentDescription:)
audioComponentDescription is a [audiotoolbox.AudioComponentDescription].
func (AVAudioUnitVarispeed) Autorelease ¶
func (a AVAudioUnitVarispeed) Autorelease() AVAudioUnitVarispeed
Autorelease adds the receiver to the current autorelease pool.
func (AVAudioUnitVarispeed) Init ¶
func (a AVAudioUnitVarispeed) Init() AVAudioUnitVarispeed
Init initializes the instance.
func (AVAudioUnitVarispeed) Rate ¶
func (a AVAudioUnitVarispeed) Rate() float32
The audio playback rate.
Discussion ¶
The varispeed audio unit resamples the input signal, and as a result, changing the playback rate also changes the pitch. For example, changing the rate to `2.0` results in the output audio playing one octave higher. Similarly, changing the rate to `0.5` results in the output audio playing one octave lower.
The audio unit measures the pitch in cents, a logarithmic unit used for measuring musical intervals. One octave is equal to 1200 cents. One musical semitone is equal to 100 cents.
Using the `rate` value, you calculate the pitch (in cents) with the formula `pitch = 1200.0 * log2(rate)`. Conversely, you calculate the appropriate `rate` for a desired pitch with the formula `rate = pow(2, cents/1200.0)`.
The default value is `1.0`. The range of values is `0.25` to `4.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitVarispeed/rate
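The rate/pitch formulas above translate directly into Go; this standalone sketch uses only the standard library:

```go
package main

import (
	"fmt"
	"math"
)

// pitchForRate returns the pitch shift, in cents, produced by playing
// resampled audio at the given rate: pitch = 1200 * log2(rate).
func pitchForRate(rate float64) float64 {
	return 1200 * math.Log2(rate)
}

// rateForPitch returns the playback rate that produces the desired
// pitch shift in cents: rate = 2^(cents/1200).
func rateForPitch(cents float64) float64 {
	return math.Pow(2, cents/1200)
}

func main() {
	fmt.Println(pitchForRate(2.0))  // 1200: one octave up
	fmt.Println(pitchForRate(0.5))  // -1200: one octave down
	fmt.Println(rateForPitch(1200)) // 2: double speed
}
```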
func (AVAudioUnitVarispeed) SetRate ¶
func (a AVAudioUnitVarispeed) SetRate(value float32)
type AVAudioUnitVarispeedClass ¶
type AVAudioUnitVarispeedClass struct {
// contains filtered or unexported fields
}
func GetAVAudioUnitVarispeedClass ¶
func GetAVAudioUnitVarispeedClass() AVAudioUnitVarispeedClass
GetAVAudioUnitVarispeedClass returns the class object for AVAudioUnitVarispeed.
func (AVAudioUnitVarispeedClass) Alloc ¶
func (ac AVAudioUnitVarispeedClass) Alloc() AVAudioUnitVarispeed
Alloc allocates memory for a new instance of the class.
func (AVAudioUnitVarispeedClass) Class ¶
func (ac AVAudioUnitVarispeedClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVAudioVoiceProcessingOtherAudioDuckingConfiguration ¶
type AVAudioVoiceProcessingOtherAudioDuckingConfiguration struct {
EnableAdvancedDucking bool // Enables advanced ducking which ducks other audio based on the presence of voice activity from local and remote chat participants.
DuckingLevel AVAudioVoiceProcessingOtherAudioDuckingLevel // The ducking level of other audio.
}
AVAudioVoiceProcessingOtherAudioDuckingConfiguration - The configuration of ducking non-voice audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioVoiceProcessingOtherAudioDuckingConfiguration
type AVAudioVoiceProcessingOtherAudioDuckingLevel ¶
type AVAudioVoiceProcessingOtherAudioDuckingLevel int
const (
	// AVAudioVoiceProcessingOtherAudioDuckingLevelDefault: The default ducking level for typical voice chat.
	AVAudioVoiceProcessingOtherAudioDuckingLevelDefault AVAudioVoiceProcessingOtherAudioDuckingLevel = 0
	// AVAudioVoiceProcessingOtherAudioDuckingLevelMax: Applies maximum ducking to other audio.
	AVAudioVoiceProcessingOtherAudioDuckingLevelMax AVAudioVoiceProcessingOtherAudioDuckingLevel = 30
	// AVAudioVoiceProcessingOtherAudioDuckingLevelMid: Applies medium ducking to other audio.
	AVAudioVoiceProcessingOtherAudioDuckingLevelMid AVAudioVoiceProcessingOtherAudioDuckingLevel = 20
	// AVAudioVoiceProcessingOtherAudioDuckingLevelMin: Applies minimum ducking to other audio.
	AVAudioVoiceProcessingOtherAudioDuckingLevelMin AVAudioVoiceProcessingOtherAudioDuckingLevel = 10
)
func (AVAudioVoiceProcessingOtherAudioDuckingLevel) String ¶
func (e AVAudioVoiceProcessingOtherAudioDuckingLevel) String() string
type AVAudioVoiceProcessingSpeechActivityEvent ¶
type AVAudioVoiceProcessingSpeechActivityEvent int
See: https://developer.apple.com/documentation/AVFAudio/AVAudioVoiceProcessingSpeechActivityEvent
const (
	// AVAudioVoiceProcessingSpeechActivityEnded: Indicates the end of speech activity.
	AVAudioVoiceProcessingSpeechActivityEnded AVAudioVoiceProcessingSpeechActivityEvent = 1
	// AVAudioVoiceProcessingSpeechActivityStarted: Indicates the start of speech activity.
	AVAudioVoiceProcessingSpeechActivityStarted AVAudioVoiceProcessingSpeechActivityEvent = 0
)
func (AVAudioVoiceProcessingSpeechActivityEvent) String ¶
func (e AVAudioVoiceProcessingSpeechActivityEvent) String() string
type AVAudioVoiceProcessingSpeechActivityEventHandler ¶
type AVAudioVoiceProcessingSpeechActivityEventHandler = func(AVAudioVoiceProcessingSpeechActivityEvent)
AVAudioVoiceProcessingSpeechActivityEventHandler handles a speech activity event.
type AVBeatRange ¶
AVBeatRange - A specific time range within a music track.
See: https://developer.apple.com/documentation/AVFAudio/AVBeatRange-c.struct
type AVExtendedNoteOnEvent ¶
type AVExtendedNoteOnEvent struct {
AVMusicEvent
}
An object that represents a custom extension of a MIDI note on event.
Overview ¶
Use this to allow an app to trigger a custom note on event on one of several Apple audio units that support it. The floating point note and velocity numbers allow for optional fractional control of the note’s runtime properties that the system modulates by those inputs. This event supports the possibility of an audio unit with more than the standard 16 MIDI channels.
Creating a Note On Event ¶
- AVExtendedNoteOnEvent.InitWithMIDINoteVelocityGroupIDDuration: Creates an event with a MIDI note, velocity, group identifier, and duration.
- AVExtendedNoteOnEvent.InitWithMIDINoteVelocityInstrumentIDGroupIDDuration: Creates a note on event with the default instrument.
Configuring a Note On Event ¶
- AVExtendedNoteOnEvent.MidiNote: The MIDI note number.
- AVExtendedNoteOnEvent.SetMidiNote
- AVExtendedNoteOnEvent.Velocity: The MIDI velocity.
- AVExtendedNoteOnEvent.SetVelocity
- AVExtendedNoteOnEvent.InstrumentID: The instrument identifier.
- AVExtendedNoteOnEvent.SetInstrumentID
- AVExtendedNoteOnEvent.GroupID: The audio unit channel that handles the event.
- AVExtendedNoteOnEvent.SetGroupID
- AVExtendedNoteOnEvent.Duration: The duration of the event, in beats.
- AVExtendedNoteOnEvent.SetDuration
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedNoteOnEvent
func AVExtendedNoteOnEventFromID ¶
func AVExtendedNoteOnEventFromID(id objc.ID) AVExtendedNoteOnEvent
AVExtendedNoteOnEventFromID constructs an AVExtendedNoteOnEvent from an objc.ID.
An object that represents a custom extension of a MIDI note on event.
func NewAVExtendedNoteOnEvent ¶
func NewAVExtendedNoteOnEvent() AVExtendedNoteOnEvent
NewAVExtendedNoteOnEvent creates a new AVExtendedNoteOnEvent instance.
func NewExtendedNoteOnEventWithMIDINoteVelocityGroupIDDuration ¶
func NewExtendedNoteOnEventWithMIDINoteVelocityGroupIDDuration(midiNote float32, velocity float32, groupID uint32, duration AVMusicTimeStamp) AVExtendedNoteOnEvent
Creates an event with a MIDI note, velocity, group identifier, and duration.
midiNote: The MIDI note number.
velocity: The MIDI velocity.
groupID: The identifier that represents the audio unit channel that handles the event.
duration: The duration of the event, in beats.
func NewExtendedNoteOnEventWithMIDINoteVelocityInstrumentIDGroupIDDuration ¶
func NewExtendedNoteOnEventWithMIDINoteVelocityInstrumentIDGroupIDDuration(midiNote float32, velocity float32, instrumentID uint32, groupID uint32, duration AVMusicTimeStamp) AVExtendedNoteOnEvent
Creates a note on event with the default instrument.
midiNote: The MIDI note number.
velocity: The MIDI velocity.
instrumentID: The default instrument.
groupID: The identifier that represents the audio unit channel that handles the event.
duration: The duration of the event, in beats.
Discussion ¶
Use the `defaultInstrument` value when you set `instrumentID`.
func (AVExtendedNoteOnEvent) Autorelease ¶
func (e AVExtendedNoteOnEvent) Autorelease() AVExtendedNoteOnEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVExtendedNoteOnEvent) Duration ¶
func (e AVExtendedNoteOnEvent) Duration() AVMusicTimeStamp
The duration of the event, in beats.
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedNoteOnEvent/duration
func (AVExtendedNoteOnEvent) GroupID ¶
func (e AVExtendedNoteOnEvent) GroupID() uint32
The audio unit channel that handles the event.
Discussion ¶
The valid range of values is between `0` and `15`, but can be higher if the AVMusicTrack destination audio unit supports more channels.
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedNoteOnEvent/groupID
func (AVExtendedNoteOnEvent) Init ¶
func (e AVExtendedNoteOnEvent) Init() AVExtendedNoteOnEvent
Init initializes the instance.
func (AVExtendedNoteOnEvent) InitWithMIDINoteVelocityGroupIDDuration ¶
func (e AVExtendedNoteOnEvent) InitWithMIDINoteVelocityGroupIDDuration(midiNote float32, velocity float32, groupID uint32, duration AVMusicTimeStamp) AVExtendedNoteOnEvent
Creates an event with a MIDI note, velocity, group identifier, and duration.
midiNote: The MIDI note number.
velocity: The MIDI velocity.
groupID: The identifier that represents the audio unit channel that handles the event.
duration: The duration of the event, in beats.
func (AVExtendedNoteOnEvent) InitWithMIDINoteVelocityInstrumentIDGroupIDDuration ¶
func (e AVExtendedNoteOnEvent) InitWithMIDINoteVelocityInstrumentIDGroupIDDuration(midiNote float32, velocity float32, instrumentID uint32, groupID uint32, duration AVMusicTimeStamp) AVExtendedNoteOnEvent
Creates a note on event with the default instrument.
midiNote: The MIDI note number.
velocity: The MIDI velocity.
instrumentID: The default instrument.
groupID: The identifier that represents the audio unit channel that handles the event.
duration: The duration of the event, in beats.
Discussion ¶
Use the `defaultInstrument` value when you set `instrumentID`.
func (AVExtendedNoteOnEvent) InstrumentID ¶
func (e AVExtendedNoteOnEvent) InstrumentID() uint32
The instrument identifier.
Discussion ¶
Set this value to `defaultInstrument`.
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedNoteOnEvent/instrumentID
func (AVExtendedNoteOnEvent) MidiNote ¶
func (e AVExtendedNoteOnEvent) MidiNote() float32
The MIDI note number.
Discussion ¶
If the instrument within the AVMusicTrack destination audio unit supports fractional values, you use this to generate arbitrary tunings. The valid range of values depends on the destination audio unit, and is usually between `0.0` and `127.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedNoteOnEvent/midiNote
func (AVExtendedNoteOnEvent) SetDuration ¶
func (e AVExtendedNoteOnEvent) SetDuration(value AVMusicTimeStamp)
func (AVExtendedNoteOnEvent) SetGroupID ¶
func (e AVExtendedNoteOnEvent) SetGroupID(value uint32)
func (AVExtendedNoteOnEvent) SetInstrumentID ¶
func (e AVExtendedNoteOnEvent) SetInstrumentID(value uint32)
func (AVExtendedNoteOnEvent) SetMidiNote ¶
func (e AVExtendedNoteOnEvent) SetMidiNote(value float32)
func (AVExtendedNoteOnEvent) SetVelocity ¶
func (e AVExtendedNoteOnEvent) SetVelocity(value float32)
func (AVExtendedNoteOnEvent) Velocity ¶
func (e AVExtendedNoteOnEvent) Velocity() float32
The MIDI velocity.
Discussion ¶
If the instrument in the AVMusicTrack destination audio unit supports fractional values, use this to generate precise changes in gain and other values. The valid range of values depends on the destination audio unit, and is usually between `0.0` and `127.0`.
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedNoteOnEvent/velocity
type AVExtendedNoteOnEventClass ¶
type AVExtendedNoteOnEventClass struct {
// contains filtered or unexported fields
}
func GetAVExtendedNoteOnEventClass ¶
func GetAVExtendedNoteOnEventClass() AVExtendedNoteOnEventClass
GetAVExtendedNoteOnEventClass returns the class object for AVExtendedNoteOnEvent.
func (AVExtendedNoteOnEventClass) Alloc ¶
func (ac AVExtendedNoteOnEventClass) Alloc() AVExtendedNoteOnEvent
Alloc allocates memory for a new instance of the class.
func (AVExtendedNoteOnEventClass) Class ¶
func (ac AVExtendedNoteOnEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVExtendedTempoEvent ¶
type AVExtendedTempoEvent struct {
AVMusicEvent
}
An object that represents a tempo change to a specific beats-per-minute value.
Creating a Tempo Event ¶
- AVExtendedTempoEvent.InitWithTempo: Creates an extended tempo event.
Configuring a Tempo Event ¶
- AVExtendedTempoEvent.Tempo: The tempo in beats per minute as a positive value.
- AVExtendedTempoEvent.SetTempo
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedTempoEvent
func AVExtendedTempoEventFromID ¶
func AVExtendedTempoEventFromID(id objc.ID) AVExtendedTempoEvent
AVExtendedTempoEventFromID constructs an AVExtendedTempoEvent from an objc.ID.
An object that represents a tempo change to a specific beats-per-minute value.
func NewAVExtendedTempoEvent ¶
func NewAVExtendedTempoEvent() AVExtendedTempoEvent
NewAVExtendedTempoEvent creates a new AVExtendedTempoEvent instance.
func NewExtendedTempoEventWithTempo ¶
func NewExtendedTempoEventWithTempo(tempo float64) AVExtendedTempoEvent
Creates an extended tempo event.
tempo: The tempo in beats per minute as a positive value.
Discussion ¶
The new tempo begins at the timestamp for this event.
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedTempoEvent/init(tempo:)
func (AVExtendedTempoEvent) Autorelease ¶
func (e AVExtendedTempoEvent) Autorelease() AVExtendedTempoEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVExtendedTempoEvent) Init ¶
func (e AVExtendedTempoEvent) Init() AVExtendedTempoEvent
Init initializes the instance.
func (AVExtendedTempoEvent) InitWithTempo ¶
func (e AVExtendedTempoEvent) InitWithTempo(tempo float64) AVExtendedTempoEvent
Creates an extended tempo event.
tempo: The tempo in beats per minute as a positive value.
Discussion ¶
The new tempo begins at the timestamp for this event.
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedTempoEvent/init(tempo:)
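The relationship between the sequence's beat-based timestamps and wall-clock time follows directly from the tempo. As a rough pure-Go sketch (the helper name is hypothetical):

```go
package main

import "fmt"

// beatsToSeconds converts a duration expressed in beats (the unit of
// AVMusicTimeStamp) into seconds at a given tempo in beats per minute.
func beatsToSeconds(beats, bpm float64) float64 {
	return beats * 60 / bpm
}

func main() {
	// Four beats at 120 BPM last two seconds.
	fmt.Println(beatsToSeconds(4, 120))
}
```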
func (AVExtendedTempoEvent) SetTempo ¶
func (e AVExtendedTempoEvent) SetTempo(value float64)
func (AVExtendedTempoEvent) Tempo ¶
func (e AVExtendedTempoEvent) Tempo() float64
The tempo in beats per minute as a positive value.
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedTempoEvent/tempo
type AVExtendedTempoEventClass ¶
type AVExtendedTempoEventClass struct {
// contains filtered or unexported fields
}
func GetAVExtendedTempoEventClass ¶
func GetAVExtendedTempoEventClass() AVExtendedTempoEventClass
GetAVExtendedTempoEventClass returns the class object for AVExtendedTempoEvent.
func (AVExtendedTempoEventClass) Alloc ¶
func (ac AVExtendedTempoEventClass) Alloc() AVExtendedTempoEvent
Alloc allocates memory for a new instance of the class.
func (AVExtendedTempoEventClass) Class ¶
func (ac AVExtendedTempoEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIChannelEvent ¶
type AVMIDIChannelEvent struct {
AVMusicEvent
}
A base class for all MIDI messages that operate on a single MIDI channel.
Configuring a Channel Event ¶
- AVMIDIChannelEvent.Channel: The MIDI channel.
- AVMIDIChannelEvent.SetChannel
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIChannelEvent
func AVMIDIChannelEventFromID ¶
func AVMIDIChannelEventFromID(id objc.ID) AVMIDIChannelEvent
AVMIDIChannelEventFromID constructs an AVMIDIChannelEvent from an objc.ID.
A base class for all MIDI messages that operate on a single MIDI channel.
func NewAVMIDIChannelEvent ¶
func NewAVMIDIChannelEvent() AVMIDIChannelEvent
NewAVMIDIChannelEvent creates a new AVMIDIChannelEvent instance.
func (AVMIDIChannelEvent) Autorelease ¶
func (m AVMIDIChannelEvent) Autorelease() AVMIDIChannelEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDIChannelEvent) Channel ¶
func (m AVMIDIChannelEvent) Channel() uint32
The MIDI channel.
Discussion ¶
The valid range of values is between `0` and `15`.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIChannelEvent/channel
func (AVMIDIChannelEvent) Init ¶
func (m AVMIDIChannelEvent) Init() AVMIDIChannelEvent
Init initializes the instance.
func (AVMIDIChannelEvent) SetChannel ¶
func (m AVMIDIChannelEvent) SetChannel(value uint32)
type AVMIDIChannelEventClass ¶
type AVMIDIChannelEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDIChannelEventClass ¶
func GetAVMIDIChannelEventClass() AVMIDIChannelEventClass
GetAVMIDIChannelEventClass returns the class object for AVMIDIChannelEvent.
func (AVMIDIChannelEventClass) Alloc ¶
func (ac AVMIDIChannelEventClass) Alloc() AVMIDIChannelEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDIChannelEventClass) Class ¶
func (ac AVMIDIChannelEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIChannelPressureEvent ¶
type AVMIDIChannelPressureEvent struct {
AVMIDIChannelEvent
}
An object that represents a MIDI channel pressure message.
Overview ¶
The effect of this message depends on the AVMusicTrack destination audio unit, and the capabilities of the destination’s loaded instrument.
Creating a Pressure Event ¶
- AVMIDIChannelPressureEvent.InitWithChannelPressure: Creates a pressure event with a channel and pressure value.
Configuring a Pressure Event ¶
- AVMIDIChannelPressureEvent.Pressure: The MIDI channel pressure.
- AVMIDIChannelPressureEvent.SetPressure
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIChannelPressureEvent
func AVMIDIChannelPressureEventFromID ¶
func AVMIDIChannelPressureEventFromID(id objc.ID) AVMIDIChannelPressureEvent
AVMIDIChannelPressureEventFromID constructs an AVMIDIChannelPressureEvent from an objc.ID.
An object that represents a MIDI channel pressure message.
func NewAVMIDIChannelPressureEvent ¶
func NewAVMIDIChannelPressureEvent() AVMIDIChannelPressureEvent
NewAVMIDIChannelPressureEvent creates a new AVMIDIChannelPressureEvent instance.
func NewMIDIChannelPressureEventWithChannelPressure ¶
func NewMIDIChannelPressureEventWithChannelPressure(channel uint32, pressure uint32) AVMIDIChannelPressureEvent
Creates a pressure event with a channel and pressure value.
func (AVMIDIChannelPressureEvent) Autorelease ¶
func (m AVMIDIChannelPressureEvent) Autorelease() AVMIDIChannelPressureEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDIChannelPressureEvent) Init ¶
func (m AVMIDIChannelPressureEvent) Init() AVMIDIChannelPressureEvent
Init initializes the instance.
func (AVMIDIChannelPressureEvent) InitWithChannelPressure ¶
func (m AVMIDIChannelPressureEvent) InitWithChannelPressure(channel uint32, pressure uint32) AVMIDIChannelPressureEvent
Creates a pressure event with a channel and pressure value.
func (AVMIDIChannelPressureEvent) Pressure ¶
func (m AVMIDIChannelPressureEvent) Pressure() uint32
The MIDI channel pressure.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIChannelPressureEvent/pressure
func (AVMIDIChannelPressureEvent) SetPressure ¶
func (m AVMIDIChannelPressureEvent) SetPressure(value uint32)
type AVMIDIChannelPressureEventClass ¶
type AVMIDIChannelPressureEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDIChannelPressureEventClass ¶
func GetAVMIDIChannelPressureEventClass() AVMIDIChannelPressureEventClass
GetAVMIDIChannelPressureEventClass returns the class object for AVMIDIChannelPressureEvent.
func (AVMIDIChannelPressureEventClass) Alloc ¶
func (ac AVMIDIChannelPressureEventClass) Alloc() AVMIDIChannelPressureEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDIChannelPressureEventClass) Class ¶
func (ac AVMIDIChannelPressureEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIControlChangeEvent ¶
type AVMIDIControlChangeEvent struct {
AVMIDIChannelEvent
}
An object that represents a MIDI control change message.
Creating a Control Change Event ¶
- AVMIDIControlChangeEvent.InitWithChannelMessageTypeValue: Creates an event with a channel, control change type, and a value.
Inspecting a Control Change Event ¶
- AVMIDIControlChangeEvent.Value: The value of the control change event.
- AVMIDIControlChangeEvent.MessageType: The type of control change message.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIControlChangeEvent
func AVMIDIControlChangeEventFromID ¶
func AVMIDIControlChangeEventFromID(id objc.ID) AVMIDIControlChangeEvent
AVMIDIControlChangeEventFromID constructs an AVMIDIControlChangeEvent from an objc.ID.
An object that represents a MIDI control change message.
func NewAVMIDIControlChangeEvent ¶
func NewAVMIDIControlChangeEvent() AVMIDIControlChangeEvent
NewAVMIDIControlChangeEvent creates a new AVMIDIControlChangeEvent instance.
func NewMIDIControlChangeEventWithChannelMessageTypeValue ¶
func NewMIDIControlChangeEventWithChannelMessageTypeValue(channel uint32, messageType AVMIDIControlChangeMessageType, value uint32) AVMIDIControlChangeEvent
Creates an event with a channel, control change type, and a value.
channel: The MIDI channel for the control change, between `0` and `15`.
messageType: The type that indicates which MIDI control change message to send.
value: The value for the control change.
func (AVMIDIControlChangeEvent) Autorelease ¶
func (m AVMIDIControlChangeEvent) Autorelease() AVMIDIControlChangeEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDIControlChangeEvent) Init ¶
func (m AVMIDIControlChangeEvent) Init() AVMIDIControlChangeEvent
Init initializes the instance.
func (AVMIDIControlChangeEvent) InitWithChannelMessageTypeValue ¶
func (m AVMIDIControlChangeEvent) InitWithChannelMessageTypeValue(channel uint32, messageType AVMIDIControlChangeMessageType, value uint32) AVMIDIControlChangeEvent
Creates an event with a channel, control change type, and a value.
channel: The MIDI channel for the control change, between `0` and `15`.
messageType: The type that indicates which MIDI control change message to send.
value: The value for the control change.
func (AVMIDIControlChangeEvent) MessageType ¶
func (m AVMIDIControlChangeEvent) MessageType() AVMIDIControlChangeMessageType
The type of control change message.
func (AVMIDIControlChangeEvent) Value ¶
func (m AVMIDIControlChangeEvent) Value() uint32
The value of the control change event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIControlChangeEvent/value
type AVMIDIControlChangeEventClass ¶
type AVMIDIControlChangeEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDIControlChangeEventClass ¶
func GetAVMIDIControlChangeEventClass() AVMIDIControlChangeEventClass
GetAVMIDIControlChangeEventClass returns the class object for AVMIDIControlChangeEvent.
func (AVMIDIControlChangeEventClass) Alloc ¶
func (ac AVMIDIControlChangeEventClass) Alloc() AVMIDIControlChangeEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDIControlChangeEventClass) Class ¶
func (ac AVMIDIControlChangeEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIControlChangeMessageType ¶
type AVMIDIControlChangeMessageType int
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIControlChangeEvent/MessageType-swift.enum
const (
	// AVMIDIControlChangeMessageTypeAllNotesOff: An event type for muting all sounding notes while maintaining the release time.
	AVMIDIControlChangeMessageTypeAllNotesOff AVMIDIControlChangeMessageType = 123
	// AVMIDIControlChangeMessageTypeAllSoundOff: An event type for muting all sounding notes.
	AVMIDIControlChangeMessageTypeAllSoundOff AVMIDIControlChangeMessageType = 120
	// AVMIDIControlChangeMessageTypeAttackTime: An event type for controlling the attack time.
	AVMIDIControlChangeMessageTypeAttackTime AVMIDIControlChangeMessageType = 73
	// AVMIDIControlChangeMessageTypeBalance: An event type for controlling the left and right channel balance.
	AVMIDIControlChangeMessageTypeBalance AVMIDIControlChangeMessageType = 8
	// AVMIDIControlChangeMessageTypeBankSelect: An event type for switching bank selection.
	AVMIDIControlChangeMessageTypeBankSelect AVMIDIControlChangeMessageType = 0
	// AVMIDIControlChangeMessageTypeBreath: An event type for a breath controller.
	AVMIDIControlChangeMessageTypeBreath AVMIDIControlChangeMessageType = 2
	// AVMIDIControlChangeMessageTypeBrightness: An event type for controlling the brightness.
	AVMIDIControlChangeMessageTypeBrightness AVMIDIControlChangeMessageType = 74
	// AVMIDIControlChangeMessageTypeChorusLevel: An event type for controlling the chorus level.
	AVMIDIControlChangeMessageTypeChorusLevel AVMIDIControlChangeMessageType = 93
	// AVMIDIControlChangeMessageTypeDataEntry: An event type for controlling the data entry parameters.
	AVMIDIControlChangeMessageTypeDataEntry AVMIDIControlChangeMessageType = 6
	// AVMIDIControlChangeMessageTypeDecayTime: An event type for controlling the decay time.
	AVMIDIControlChangeMessageTypeDecayTime AVMIDIControlChangeMessageType = 75
	// AVMIDIControlChangeMessageTypeExpression: An event type that represents an expression controller.
	AVMIDIControlChangeMessageTypeExpression AVMIDIControlChangeMessageType = 11
	// AVMIDIControlChangeMessageTypeFilterResonance: An event type for a filter resonance.
	AVMIDIControlChangeMessageTypeFilterResonance AVMIDIControlChangeMessageType = 71
	// AVMIDIControlChangeMessageTypeFoot: An event type for sending a continuous stream of values when using a foot controller.
	AVMIDIControlChangeMessageTypeFoot AVMIDIControlChangeMessageType = 4
	// AVMIDIControlChangeMessageTypeHold2Pedal: An event type for holding notes.
	AVMIDIControlChangeMessageTypeHold2Pedal AVMIDIControlChangeMessageType = 69
	// AVMIDIControlChangeMessageTypeLegatoPedal: An event type for switching the legato pedal on or off.
	AVMIDIControlChangeMessageTypeLegatoPedal AVMIDIControlChangeMessageType = 68
	// AVMIDIControlChangeMessageTypeModWheel: An event type for modulating a vibrato effect.
	AVMIDIControlChangeMessageTypeModWheel AVMIDIControlChangeMessageType = 1
	// AVMIDIControlChangeMessageTypeMonoModeOff: An event type for setting the device mode to polyphonic.
	AVMIDIControlChangeMessageTypeMonoModeOff AVMIDIControlChangeMessageType = 127
	// AVMIDIControlChangeMessageTypeMonoModeOn: An event type for setting the device mode to monophonic.
	AVMIDIControlChangeMessageTypeMonoModeOn AVMIDIControlChangeMessageType = 126
	// AVMIDIControlChangeMessageTypeOmniModeOff: An event type for setting omni off mode.
	AVMIDIControlChangeMessageTypeOmniModeOff AVMIDIControlChangeMessageType = 124
	// AVMIDIControlChangeMessageTypeOmniModeOn: An event type for setting omni on mode.
	AVMIDIControlChangeMessageTypeOmniModeOn AVMIDIControlChangeMessageType = 125
	// AVMIDIControlChangeMessageTypePan: An event type for controlling the left and right channel pan.
	AVMIDIControlChangeMessageTypePan AVMIDIControlChangeMessageType = 10
	// AVMIDIControlChangeMessageTypePortamento: An event type for switching portamento on or off.
	AVMIDIControlChangeMessageTypePortamento AVMIDIControlChangeMessageType = 65
	// AVMIDIControlChangeMessageTypePortamentoTime: An event type for controlling the portamento rate.
	AVMIDIControlChangeMessageTypePortamentoTime AVMIDIControlChangeMessageType = 5
	// AVMIDIControlChangeMessageTypeRPN_LSB: An event type that represents the registered parameter number LSB.
	AVMIDIControlChangeMessageTypeRPN_LSB AVMIDIControlChangeMessageType = 100
	// AVMIDIControlChangeMessageTypeRPN_MSB: An event type that represents the registered parameter number MSB.
	AVMIDIControlChangeMessageTypeRPN_MSB AVMIDIControlChangeMessageType = 101
	// AVMIDIControlChangeMessageTypeReleaseTime: An event type for controlling the release time.
	AVMIDIControlChangeMessageTypeReleaseTime AVMIDIControlChangeMessageType = 72
	// AVMIDIControlChangeMessageTypeResetAllControllers: An event type for resetting all controllers to their default state.
	AVMIDIControlChangeMessageTypeResetAllControllers AVMIDIControlChangeMessageType = 121
	// AVMIDIControlChangeMessageTypeReverbLevel: An event type for controlling the reverb level.
	AVMIDIControlChangeMessageTypeReverbLevel AVMIDIControlChangeMessageType = 91
	// AVMIDIControlChangeMessageTypeSoft: An event type for lowering the volume of the notes.
	AVMIDIControlChangeMessageTypeSoft AVMIDIControlChangeMessageType = 67
	// AVMIDIControlChangeMessageTypeSostenuto: An event type for switching sostenuto on or off.
	AVMIDIControlChangeMessageTypeSostenuto AVMIDIControlChangeMessageType = 66
	// AVMIDIControlChangeMessageTypeSustain: An event type for switching a damper pedal on or off.
	AVMIDIControlChangeMessageTypeSustain AVMIDIControlChangeMessageType = 64
	// AVMIDIControlChangeMessageTypeVibratoDelay: An event type for controlling the vibrato delay.
	AVMIDIControlChangeMessageTypeVibratoDelay AVMIDIControlChangeMessageType = 78
	// AVMIDIControlChangeMessageTypeVibratoDepth: An event type for controlling the vibrato depth.
	AVMIDIControlChangeMessageTypeVibratoDepth AVMIDIControlChangeMessageType = 77
	// AVMIDIControlChangeMessageTypeVibratoRate: An event type for controlling the vibrato rate.
	AVMIDIControlChangeMessageTypeVibratoRate AVMIDIControlChangeMessageType = 76
	// AVMIDIControlChangeMessageTypeVolume: An event type for controlling the channel volume.
	AVMIDIControlChangeMessageTypeVolume AVMIDIControlChangeMessageType = 7
)
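These constant values mirror the controller numbers defined by the MIDI 1.0 specification, so the same numbers appear when assembling a raw control change message by hand. A pure-Go sketch under that assumption (the helper name is hypothetical):

```go
package main

import "fmt"

// controlChangeBytes packs a raw three-byte MIDI 1.0 control change
// message: a status byte of 0xB0 plus the channel (0-15), followed by
// the controller number and its value (both 0-127).
func controlChangeBytes(channel, controller, value byte) [3]byte {
	return [3]byte{0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F}
}

func main() {
	// Channel volume (controller 7, matching
	// AVMIDIControlChangeMessageTypeVolume) set to 100 on channel 0.
	fmt.Printf("% X\n", controlChangeBytes(0, 7, 100))
}
```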
func (AVMIDIControlChangeMessageType) String ¶
func (e AVMIDIControlChangeMessageType) String() string
type AVMIDIMetaEvent ¶
type AVMIDIMetaEvent struct {
AVMusicEvent
}
An object that represents MIDI meta event messages.
Overview ¶
You can’t modify the size or contents of this event after you create it. The event doesn’t verify that its content matches the MIDI specification.
You can only add AVMIDIMetaEventTypeTempo, AVMIDIMetaEventTypeSmpteOffset, or AVMIDIMetaEventTypeTimeSignature events to a sequence’s tempo track.
Creating a Meta Event ¶
- AVMIDIMetaEvent.InitWithTypeData: Creates an event with a MIDI meta event type and data.
Getting the Meta Event Type ¶
- AVMIDIMetaEvent.Type: The type of meta event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIMetaEvent
func AVMIDIMetaEventFromID ¶
func AVMIDIMetaEventFromID(id objc.ID) AVMIDIMetaEvent
AVMIDIMetaEventFromID constructs an AVMIDIMetaEvent from an objc.ID.
An object that represents MIDI meta event messages.
func NewAVMIDIMetaEvent ¶
func NewAVMIDIMetaEvent() AVMIDIMetaEvent
NewAVMIDIMetaEvent creates a new AVMIDIMetaEvent instance.
func NewMIDIMetaEventWithTypeData ¶
func NewMIDIMetaEventWithTypeData(type_ AVMIDIMetaEventType, data foundation.INSData) AVMIDIMetaEvent
Creates an event with a MIDI meta event type and data.
type: The meta event type.
data: The data that contains the contents of the meta event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIMetaEvent/init(type:data:)
func (AVMIDIMetaEvent) Autorelease ¶
func (m AVMIDIMetaEvent) Autorelease() AVMIDIMetaEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDIMetaEvent) Init ¶
func (m AVMIDIMetaEvent) Init() AVMIDIMetaEvent
Init initializes the instance.
func (AVMIDIMetaEvent) InitWithTypeData ¶
func (m AVMIDIMetaEvent) InitWithTypeData(type_ AVMIDIMetaEventType, data foundation.INSData) AVMIDIMetaEvent
Creates an event with a MIDI meta event type and data.
type: The meta event type.
data: The data that contains the contents of the meta event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIMetaEvent/init(type:data:)
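Because the event doesn’t validate its payload, the data argument must already be in the layout the Standard MIDI Files specification defines for that meta event type. For a tempo meta event (type 81), that layout is a three-byte big-endian count of microseconds per quarter note; a pure-Go sketch of building it (the helper name is hypothetical):

```go
package main

import "fmt"

// tempoMetaPayload builds the three-byte big-endian payload of a
// Standard MIDI File tempo meta event: the number of microseconds per
// quarter note at the given tempo in beats per minute.
func tempoMetaPayload(bpm float64) [3]byte {
	usPerQuarter := uint32(60_000_000 / bpm)
	return [3]byte{
		byte(usPerQuarter >> 16),
		byte(usPerQuarter >> 8),
		byte(usPerQuarter),
	}
}

func main() {
	// 120 BPM is 500000 microseconds per quarter note (0x07A120).
	fmt.Printf("% X\n", tempoMetaPayload(120))
}
```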
func (AVMIDIMetaEvent) Type ¶
func (m AVMIDIMetaEvent) Type() AVMIDIMetaEventType
The type of meta event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIMetaEvent/type
type AVMIDIMetaEventClass ¶
type AVMIDIMetaEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDIMetaEventClass ¶
func GetAVMIDIMetaEventClass() AVMIDIMetaEventClass
GetAVMIDIMetaEventClass returns the class object for AVMIDIMetaEvent.
func (AVMIDIMetaEventClass) Alloc ¶
func (ac AVMIDIMetaEventClass) Alloc() AVMIDIMetaEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDIMetaEventClass) Class ¶
func (ac AVMIDIMetaEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIMetaEventType ¶
type AVMIDIMetaEventType int
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIMetaEvent/EventType
const (
	// AVMIDIMetaEventTypeCopyright: An event type that represents a copyright.
	AVMIDIMetaEventTypeCopyright AVMIDIMetaEventType = 2
	// AVMIDIMetaEventTypeCuePoint: An event type that represents a cue point.
	AVMIDIMetaEventTypeCuePoint AVMIDIMetaEventType = 7
	// AVMIDIMetaEventTypeEndOfTrack: An event type that represents the end of the track.
	AVMIDIMetaEventTypeEndOfTrack AVMIDIMetaEventType = 47
	// AVMIDIMetaEventTypeInstrument: An event type that represents an instrument.
	AVMIDIMetaEventTypeInstrument AVMIDIMetaEventType = 4
	// AVMIDIMetaEventTypeKeySignature: An event type that represents a key signature.
	AVMIDIMetaEventTypeKeySignature AVMIDIMetaEventType = 89
	// AVMIDIMetaEventTypeLyric: An event type that represents a lyric.
	AVMIDIMetaEventTypeLyric AVMIDIMetaEventType = 5
	// AVMIDIMetaEventTypeMarker: An event type that represents a marker.
	AVMIDIMetaEventTypeMarker AVMIDIMetaEventType = 6
	// AVMIDIMetaEventTypeMidiChannel: An event type that represents a MIDI channel.
	AVMIDIMetaEventTypeMidiChannel AVMIDIMetaEventType = 32
	// AVMIDIMetaEventTypeMidiPort: An event type that represents a MIDI port.
	AVMIDIMetaEventTypeMidiPort AVMIDIMetaEventType = 33
	// AVMIDIMetaEventTypeProprietaryEvent: An event type that represents a proprietary event.
	AVMIDIMetaEventTypeProprietaryEvent AVMIDIMetaEventType = 127
	// AVMIDIMetaEventTypeSequenceNumber: An event type that represents a sequence number.
	AVMIDIMetaEventTypeSequenceNumber AVMIDIMetaEventType = 0
	// AVMIDIMetaEventTypeSmpteOffset: An event type that represents a SMPTE time offset.
	AVMIDIMetaEventTypeSmpteOffset AVMIDIMetaEventType = 84
	// AVMIDIMetaEventTypeTempo: An event type that represents a tempo.
	AVMIDIMetaEventTypeTempo AVMIDIMetaEventType = 81
	// AVMIDIMetaEventTypeText: An event type that represents text.
	AVMIDIMetaEventTypeText AVMIDIMetaEventType = 1
	// AVMIDIMetaEventTypeTimeSignature: An event type that represents a time signature.
	AVMIDIMetaEventTypeTimeSignature AVMIDIMetaEventType = 88
	// AVMIDIMetaEventTypeTrackName: An event type that represents a track name.
	AVMIDIMetaEventTypeTrackName AVMIDIMetaEventType = 3
)
func (AVMIDIMetaEventType) String ¶
func (e AVMIDIMetaEventType) String() string
type AVMIDINoteEvent ¶
type AVMIDINoteEvent struct {
AVMusicEvent
}
An object that represents MIDI note on or off messages.
Creating a MIDI Note Event ¶
- AVMIDINoteEvent.InitWithChannelKeyVelocityDuration: Creates an event with a MIDI channel, key number, velocity, and duration.
Configuring a MIDI Note Event ¶
- AVMIDINoteEvent.Channel: The MIDI channel.
- AVMIDINoteEvent.SetChannel
- AVMIDINoteEvent.Key: The MIDI key number.
- AVMIDINoteEvent.SetKey
- AVMIDINoteEvent.Velocity: The MIDI velocity.
- AVMIDINoteEvent.SetVelocity
- AVMIDINoteEvent.Duration: The duration for the note, in beats.
- AVMIDINoteEvent.SetDuration
See: https://developer.apple.com/documentation/AVFAudio/AVMIDINoteEvent
func AVMIDINoteEventFromID ¶
func AVMIDINoteEventFromID(id objc.ID) AVMIDINoteEvent
AVMIDINoteEventFromID constructs an AVMIDINoteEvent from an objc.ID.
An object that represents MIDI note on or off messages.
func NewAVMIDINoteEvent ¶
func NewAVMIDINoteEvent() AVMIDINoteEvent
NewAVMIDINoteEvent creates a new AVMIDINoteEvent instance.
func NewMIDINoteEventWithChannelKeyVelocityDuration ¶
func NewMIDINoteEventWithChannelKeyVelocityDuration(channel uint32, keyNum uint32, velocity uint32, duration AVMusicTimeStamp) AVMIDINoteEvent
Creates an event with a MIDI channel, key number, velocity, and duration.
channel: The MIDI channel, between `0` and `15`.
keyNum: The MIDI key number, between `0` and `127`.
velocity: The MIDI velocity, between `0` and `127`.
duration: The duration for this note, in beats.
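The keyNum argument follows standard MIDI key numbering. A small pure-Go helper (hypothetical, using the common middle C = C4 = key 60 convention) shows the mapping from note names:

```go
package main

import "fmt"

// keyNumber returns the MIDI key number for a note name and octave,
// using the common convention that middle C (C4) is key 60.
func keyNumber(name string, octave int) uint32 {
	semitone := map[string]int{
		"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
		"F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11,
	}
	return uint32((octave+1)*12 + semitone[name])
}

func main() {
	fmt.Println(keyNumber("C", 4)) // middle C
	fmt.Println(keyNumber("A", 4)) // concert A
}
```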
func (AVMIDINoteEvent) Autorelease ¶
func (m AVMIDINoteEvent) Autorelease() AVMIDINoteEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDINoteEvent) Channel ¶
func (m AVMIDINoteEvent) Channel() uint32
The MIDI channel.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDINoteEvent/channel
func (AVMIDINoteEvent) Duration ¶
func (m AVMIDINoteEvent) Duration() AVMusicTimeStamp
The duration for the note, in beats.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDINoteEvent/duration
func (AVMIDINoteEvent) Init ¶
func (m AVMIDINoteEvent) Init() AVMIDINoteEvent
Init initializes the instance.
func (AVMIDINoteEvent) InitWithChannelKeyVelocityDuration ¶
func (m AVMIDINoteEvent) InitWithChannelKeyVelocityDuration(channel uint32, keyNum uint32, velocity uint32, duration AVMusicTimeStamp) AVMIDINoteEvent
Creates an event with a MIDI channel, key number, velocity, and duration.
channel: The MIDI channel, between `0` and `15`.
keyNum: The MIDI key number, between `0` and `127`.
velocity: The MIDI velocity, between `0` and `127`.
duration: The duration for this note, in beats.
func (AVMIDINoteEvent) Key ¶
func (m AVMIDINoteEvent) Key() uint32
The MIDI key number.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDINoteEvent/key
func (AVMIDINoteEvent) SetChannel ¶
func (m AVMIDINoteEvent) SetChannel(value uint32)
func (AVMIDINoteEvent) SetDuration ¶
func (m AVMIDINoteEvent) SetDuration(value AVMusicTimeStamp)
func (AVMIDINoteEvent) SetKey ¶
func (m AVMIDINoteEvent) SetKey(value uint32)
func (AVMIDINoteEvent) SetVelocity ¶
func (m AVMIDINoteEvent) SetVelocity(value uint32)
func (AVMIDINoteEvent) Velocity ¶
func (m AVMIDINoteEvent) Velocity() uint32
The MIDI velocity.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDINoteEvent/velocity
type AVMIDINoteEventClass ¶
type AVMIDINoteEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDINoteEventClass ¶
func GetAVMIDINoteEventClass() AVMIDINoteEventClass
GetAVMIDINoteEventClass returns the class object for AVMIDINoteEvent.
func (AVMIDINoteEventClass) Alloc ¶
func (ac AVMIDINoteEventClass) Alloc() AVMIDINoteEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDINoteEventClass) Class ¶
func (ac AVMIDINoteEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIPitchBendEvent ¶
type AVMIDIPitchBendEvent struct {
AVMIDIChannelEvent
}
An object that represents a MIDI pitch bend message.
Creating a Pitch Bend Event ¶
- AVMIDIPitchBendEvent.InitWithChannelValue: Creates an event with a channel and pitch bend value.
Configuring a Pitch Bend Event ¶
- AVMIDIPitchBendEvent.Value: The value of the pitch bend event.
- AVMIDIPitchBendEvent.SetValue
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPitchBendEvent
func AVMIDIPitchBendEventFromID ¶
func AVMIDIPitchBendEventFromID(id objc.ID) AVMIDIPitchBendEvent
AVMIDIPitchBendEventFromID constructs an AVMIDIPitchBendEvent from an objc.ID.
An object that represents a MIDI pitch bend message.
func NewAVMIDIPitchBendEvent ¶
func NewAVMIDIPitchBendEvent() AVMIDIPitchBendEvent
NewAVMIDIPitchBendEvent creates a new AVMIDIPitchBendEvent instance.
func NewMIDIPitchBendEventWithChannelValue ¶
func NewMIDIPitchBendEventWithChannelValue(channel uint32, value uint32) AVMIDIPitchBendEvent
Creates an event with a channel and pitch bend value.
channel: The MIDI channel for the message, between `0` and `15`.
value: The pitch bend value, between `0` and `16383`.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPitchBendEvent/init(channel:value:)
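The 14-bit value range centers on `8192`, which means no bend. Assuming the common default bend range of plus or minus two semitones (a property of the receiving instrument, not of this event), a semitone offset maps onto the value like this (the helper name is hypothetical):

```go
package main

import "fmt"

// pitchBendValue maps a semitone offset onto the 14-bit pitch bend
// range 0-16383, assuming the common default bend range of +/-2
// semitones; 8192 is the centered, no-bend value.
func pitchBendValue(semitones float64) uint32 {
	v := 8192 + semitones/2*8192
	if v < 0 {
		v = 0
	}
	if v > 16383 {
		v = 16383
	}
	return uint32(v)
}

func main() {
	fmt.Println(pitchBendValue(0))  // centered: no bend
	fmt.Println(pitchBendValue(1))  // bend up one semitone
	fmt.Println(pitchBendValue(-2)) // full bend down
}
```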
func (AVMIDIPitchBendEvent) Autorelease ¶
func (m AVMIDIPitchBendEvent) Autorelease() AVMIDIPitchBendEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDIPitchBendEvent) Init ¶
func (m AVMIDIPitchBendEvent) Init() AVMIDIPitchBendEvent
Init initializes the instance.
func (AVMIDIPitchBendEvent) InitWithChannelValue ¶
func (m AVMIDIPitchBendEvent) InitWithChannelValue(channel uint32, value uint32) AVMIDIPitchBendEvent
Creates an event with a channel and pitch bend value.
channel: The MIDI channel for the message, between `0` and `15`.
value: The pitch bend value, between `0` and `16383`.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPitchBendEvent/init(channel:value:)
func (AVMIDIPitchBendEvent) SetValue ¶
func (m AVMIDIPitchBendEvent) SetValue(value uint32)
func (AVMIDIPitchBendEvent) Value ¶
func (m AVMIDIPitchBendEvent) Value() uint32
The value of the pitch bend event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPitchBendEvent/value
type AVMIDIPitchBendEventClass ¶
type AVMIDIPitchBendEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDIPitchBendEventClass ¶
func GetAVMIDIPitchBendEventClass() AVMIDIPitchBendEventClass
GetAVMIDIPitchBendEventClass returns the class object for AVMIDIPitchBendEvent.
func (AVMIDIPitchBendEventClass) Alloc ¶
func (ac AVMIDIPitchBendEventClass) Alloc() AVMIDIPitchBendEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDIPitchBendEventClass) Class ¶
func (ac AVMIDIPitchBendEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIPlayer ¶
type AVMIDIPlayer struct {
objectivec.Object
}
An object that plays MIDI data through a system sound module.
Overview ¶
For more information about preparing your app to play audio, see Configuring your app for media playback.
Creating a MIDI player ¶
- AVMIDIPlayer.InitWithContentsOfURLSoundBankURLError: Creates a player to play a MIDI file with the specified soundbank.
- AVMIDIPlayer.InitWithDataSoundBankURLError: Creates a player to play MIDI data with the specified soundbank.
Controlling playback ¶
- AVMIDIPlayer.PrepareToPlay: Prepares the player to play the sequence by prerolling all events.
- AVMIDIPlayer.Play: Plays the MIDI sequence.
- AVMIDIPlayer.Stop: Stops playing the sequence.
- AVMIDIPlayer.Playing: A Boolean value that indicates whether the sequence is playing.
Configuring playback settings ¶
- AVMIDIPlayer.Rate: The playback rate of the player.
- AVMIDIPlayer.SetRate
Accessing player timing ¶
- AVMIDIPlayer.CurrentPosition: The current playback position, in seconds.
- AVMIDIPlayer.SetCurrentPosition
- AVMIDIPlayer.Duration: The duration, in seconds, of the currently loaded file.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer
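CurrentPosition and Duration report sequence time in seconds, while Rate scales how quickly that time advances. A rough estimate of the wall-clock time remaining combines the three (a sketch; the helper name is hypothetical):

```go
package main

import "fmt"

// remainingPlaybackTime estimates the wall-clock seconds of playback
// left, given the sequence duration and current position in seconds
// (as reported by Duration and CurrentPosition) and the player's
// current playback rate.
func remainingPlaybackTime(duration, currentPosition, rate float64) float64 {
	return (duration - currentPosition) / rate
}

func main() {
	// Six sequence-seconds left, played back at double speed.
	fmt.Println(remainingPlaybackTime(10, 4, 2))
}
```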
func AVMIDIPlayerFromID ¶
func AVMIDIPlayerFromID(id objc.ID) AVMIDIPlayer
AVMIDIPlayerFromID constructs an AVMIDIPlayer from an objc.ID.
An object that plays MIDI data through a system sound module.
func NewAVMIDIPlayer ¶
func NewAVMIDIPlayer() AVMIDIPlayer
NewAVMIDIPlayer creates a new AVMIDIPlayer instance.
func NewMIDIPlayerWithContentsOfURLSoundBankURLError ¶
func NewMIDIPlayerWithContentsOfURLSoundBankURLError(inURL foundation.INSURL, bankURL foundation.INSURL) (AVMIDIPlayer, error)
Creates a player to play a MIDI file with the specified soundbank.
inURL: The URL of the file to play.
bankURL: The URL of the sound bank. The sound bank must be in SoundFont2 or DLS format. In macOS, you can pass nil for the bank URL argument to use the default sound bank. In iOS, you must always pass a valid bank file.
Return Value ¶
A new MIDI player, or nil if an error occurred.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/init(contentsOf:soundBankURL:)
func NewMIDIPlayerWithDataSoundBankURLError ¶
func NewMIDIPlayerWithDataSoundBankURLError(data foundation.INSData, bankURL foundation.INSURL) (AVMIDIPlayer, error)
Creates a player to play MIDI data with the specified soundbank.
data: The data to play.
bankURL: The URL of the sound bank. The sound bank must be a SoundFont2 or DLS bank. In macOS, you can pass nil for the bank URL argument to use the default sound bank. In iOS, you must always pass a valid bank file.
Return Value ¶
A new MIDI player, or nil if an error occurred.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/init(data:soundBankURL:)
func (AVMIDIPlayer) Autorelease ¶
func (m AVMIDIPlayer) Autorelease() AVMIDIPlayer
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDIPlayer) CurrentPosition ¶
func (m AVMIDIPlayer) CurrentPosition() float64
The current playback position, in seconds.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/currentPosition
func (AVMIDIPlayer) Duration ¶
func (m AVMIDIPlayer) Duration() float64
The duration, in seconds, of the currently loaded file.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/duration
func (AVMIDIPlayer) InitWithContentsOfURLSoundBankURLError ¶
func (m AVMIDIPlayer) InitWithContentsOfURLSoundBankURLError(inURL foundation.INSURL, bankURL foundation.INSURL) (AVMIDIPlayer, error)
Creates a player to play a MIDI file with the specified soundbank.
inURL: The URL of the file to play.
bankURL: The URL of the sound bank. The sound bank must be in SoundFont2 or DLS format. In macOS, you can pass nil for the bank URL argument to use the default sound bank. In iOS, you must always pass a valid bank file.
Return Value ¶
A new MIDI player, or nil if an error occurred.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/init(contentsOf:soundBankURL:)
func (AVMIDIPlayer) InitWithDataSoundBankURLError ¶
func (m AVMIDIPlayer) InitWithDataSoundBankURLError(data foundation.INSData, bankURL foundation.INSURL) (AVMIDIPlayer, error)
Creates a player to play MIDI data with the specified soundbank.
data: The data to play.
bankURL: The URL of the sound bank. The sound bank must be a SoundFont2 or DLS bank. In macOS, you can pass nil for the bank URL argument to use the default sound bank. In iOS, you must always pass a valid bank file.
Return Value ¶
A new MIDI player, or nil if an error occurred.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/init(data:soundBankURL:)
func (AVMIDIPlayer) Play ¶
func (m AVMIDIPlayer) Play(completionHandler ErrorHandler)
Plays the MIDI sequence.
completionHandler: A closure the system calls when playback completes.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/play(_:)
func (AVMIDIPlayer) Playing ¶
func (m AVMIDIPlayer) Playing() bool
A Boolean value that indicates whether the sequence is playing.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/isPlaying
func (AVMIDIPlayer) PrepareToPlay ¶
func (m AVMIDIPlayer) PrepareToPlay()
Prepares the player to play the sequence by prerolling all events.
Discussion ¶
The system automatically calls this method on playback, but calling it in advance minimizes the delay between calling [Play] and the start of sound output.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/prepareToPlay()
func (AVMIDIPlayer) Rate ¶
func (m AVMIDIPlayer) Rate() float32
The playback rate of the player.
Discussion ¶
The default value is `1.0`, the standard playback rate.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/rate
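Because the rate scales playback speed, the wall-clock time needed to play a loaded sequence is its duration divided by the rate. A small helper, not part of these bindings, sketches the relationship:

```go
package main

import "fmt"

// playbackSeconds returns the wall-clock time needed to play a sequence of
// the given duration (in seconds) at the given playback rate. A rate of 2.0
// plays twice as fast, halving the elapsed time.
func playbackSeconds(duration float64, rate float32) float64 {
	return duration / float64(rate)
}

func main() {
	fmt.Println(playbackSeconds(120, 2.0)) // a 2-minute file at double speed plays in 60s
}
```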
func (AVMIDIPlayer) SetCurrentPosition ¶
func (m AVMIDIPlayer) SetCurrentPosition(value float64)
func (AVMIDIPlayer) SetRate ¶
func (m AVMIDIPlayer) SetRate(value float32)
func (AVMIDIPlayer) Stop ¶
func (m AVMIDIPlayer) Stop()
Stops playing the sequence.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer/stop()
type AVMIDIPlayerClass ¶
type AVMIDIPlayerClass struct {
// contains filtered or unexported fields
}
func GetAVMIDIPlayerClass ¶
func GetAVMIDIPlayerClass() AVMIDIPlayerClass
GetAVMIDIPlayerClass returns the class object for AVMIDIPlayer.
func (AVMIDIPlayerClass) Alloc ¶
func (ac AVMIDIPlayerClass) Alloc() AVMIDIPlayer
Alloc allocates memory for a new instance of the class.
func (AVMIDIPlayerClass) Class ¶
func (ac AVMIDIPlayerClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIPlayerCompletionHandler ¶
type AVMIDIPlayerCompletionHandler = func()
AVMIDIPlayerCompletionHandler is a callback that the system invokes when MIDI playback completes.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayerCompletionHandler
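Because the completion handler is a plain `func()`, a common Go pattern is to close a channel inside it so the caller can block until playback finishes. The sketch below uses a stand-in for the player's Play method to show the pattern; it does not call the real bindings:

```go
package main

import "fmt"

// AVMIDIPlayerCompletionHandler mirrors the bindings' type: a no-argument callback.
type AVMIDIPlayerCompletionHandler = func()

// startPlayback stands in for AVMIDIPlayer.Play: it does its work and then
// invokes the completion handler, as the real player does when playback ends.
func startPlayback(done AVMIDIPlayerCompletionHandler) {
	// ... playback would happen here ...
	done()
}

func main() {
	finished := make(chan struct{})
	startPlayback(func() { close(finished) })
	<-finished // block until the handler fires
	fmt.Println("playback complete")
}
```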
type AVMIDIPolyPressureEvent ¶
type AVMIDIPolyPressureEvent struct {
AVMIDIChannelEvent
}
An object that represents a MIDI poly or key pressure event.
Creating a Poly Pressure Event ¶
- AVMIDIPolyPressureEvent.InitWithChannelKeyPressure: Creates an event with a channel, MIDI key number, and a key pressure value.
Configuring a Poly Pressure Event ¶
- AVMIDIPolyPressureEvent.Key: The MIDI key number.
- AVMIDIPolyPressureEvent.SetKey
- AVMIDIPolyPressureEvent.Pressure: The poly pressure value for the requested key.
- AVMIDIPolyPressureEvent.SetPressure
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPolyPressureEvent
func AVMIDIPolyPressureEventFromID ¶
func AVMIDIPolyPressureEventFromID(id objc.ID) AVMIDIPolyPressureEvent
AVMIDIPolyPressureEventFromID constructs an AVMIDIPolyPressureEvent from an objc.ID.
An object that represents a MIDI poly or key pressure event.
func NewAVMIDIPolyPressureEvent ¶
func NewAVMIDIPolyPressureEvent() AVMIDIPolyPressureEvent
NewAVMIDIPolyPressureEvent creates a new AVMIDIPolyPressureEvent instance.
func NewMIDIPolyPressureEventWithChannelKeyPressure ¶
func NewMIDIPolyPressureEventWithChannelKeyPressure(channel uint32, key uint32, pressure uint32) AVMIDIPolyPressureEvent
Creates an event with a channel, MIDI key number, and a key pressure value.
channel: The MIDI channel for the message, between `0` and `15`.
key: The MIDI key number to apply the pressure to.
pressure: The poly pressure value.
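As with other MIDI channel messages, the channel must fall in `0`–`15` and the key and pressure are 7-bit values in `0`–`127`. A hypothetical validation helper, not part of these bindings, can check arguments before constructing the event:

```go
package main

import "fmt"

// validMIDIChannelEvent reports whether the arguments are in the ranges the
// MIDI specification allows: channels 0-15, and 7-bit data bytes 0-127.
// This helper is illustrative and not part of the bindings.
func validMIDIChannelEvent(channel, key, pressure uint32) bool {
	return channel <= 15 && key <= 127 && pressure <= 127
}

func main() {
	fmt.Println(validMIDIChannelEvent(0, 60, 64))  // middle C at moderate pressure: true
	fmt.Println(validMIDIChannelEvent(16, 60, 64)) // channel out of range: false
}
```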
func (AVMIDIPolyPressureEvent) Autorelease ¶
func (m AVMIDIPolyPressureEvent) Autorelease() AVMIDIPolyPressureEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDIPolyPressureEvent) Init ¶
func (m AVMIDIPolyPressureEvent) Init() AVMIDIPolyPressureEvent
Init initializes the instance.
func (AVMIDIPolyPressureEvent) InitWithChannelKeyPressure ¶
func (m AVMIDIPolyPressureEvent) InitWithChannelKeyPressure(channel uint32, key uint32, pressure uint32) AVMIDIPolyPressureEvent
Creates an event with a channel, MIDI key number, and a key pressure value.
channel: The MIDI channel for the message, between `0` and `15`.
key: The MIDI key number to apply the pressure to.
pressure: The poly pressure value.
func (AVMIDIPolyPressureEvent) Key ¶
func (m AVMIDIPolyPressureEvent) Key() uint32
The MIDI key number.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPolyPressureEvent/key
func (AVMIDIPolyPressureEvent) Pressure ¶
func (m AVMIDIPolyPressureEvent) Pressure() uint32
The poly pressure value for the requested key.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPolyPressureEvent/pressure
func (AVMIDIPolyPressureEvent) SetKey ¶
func (m AVMIDIPolyPressureEvent) SetKey(value uint32)
func (AVMIDIPolyPressureEvent) SetPressure ¶
func (m AVMIDIPolyPressureEvent) SetPressure(value uint32)
type AVMIDIPolyPressureEventClass ¶
type AVMIDIPolyPressureEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDIPolyPressureEventClass ¶
func GetAVMIDIPolyPressureEventClass() AVMIDIPolyPressureEventClass
GetAVMIDIPolyPressureEventClass returns the class object for AVMIDIPolyPressureEvent.
func (AVMIDIPolyPressureEventClass) Alloc ¶
func (ac AVMIDIPolyPressureEventClass) Alloc() AVMIDIPolyPressureEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDIPolyPressureEventClass) Class ¶
func (ac AVMIDIPolyPressureEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDIProgramChangeEvent ¶
type AVMIDIProgramChangeEvent struct {
AVMIDIChannelEvent
}
An object that represents a MIDI program or patch change message.
Overview ¶
The effect of this message depends on the AVMusicTrack destination audio unit.
Creating a Program Change Event ¶
- AVMIDIProgramChangeEvent.InitWithChannelProgramNumber: Creates a program change event with a channel and program number.
Configuring a Program Change Event ¶
- AVMIDIProgramChangeEvent.ProgramNumber: The MIDI program number.
- AVMIDIProgramChangeEvent.SetProgramNumber
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIProgramChangeEvent
func AVMIDIProgramChangeEventFromID ¶
func AVMIDIProgramChangeEventFromID(id objc.ID) AVMIDIProgramChangeEvent
AVMIDIProgramChangeEventFromID constructs an AVMIDIProgramChangeEvent from an objc.ID.
An object that represents a MIDI program or patch change message.
func NewAVMIDIProgramChangeEvent ¶
func NewAVMIDIProgramChangeEvent() AVMIDIProgramChangeEvent
NewAVMIDIProgramChangeEvent creates a new AVMIDIProgramChangeEvent instance.
func NewMIDIProgramChangeEventWithChannelProgramNumber ¶
func NewMIDIProgramChangeEventWithChannelProgramNumber(channel uint32, programNumber uint32) AVMIDIProgramChangeEvent
Creates a program change event with a channel and program number.
channel: The MIDI channel for the message, between `0` and `15`.
programNumber: The program number to send, between `0` and `127`.
Discussion ¶
The instrument this chooses depends on [MIDIControlChangeMessageTypeBankSelect] events sent prior to this event.
func (AVMIDIProgramChangeEvent) Autorelease ¶
func (m AVMIDIProgramChangeEvent) Autorelease() AVMIDIProgramChangeEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDIProgramChangeEvent) Init ¶
func (m AVMIDIProgramChangeEvent) Init() AVMIDIProgramChangeEvent
Init initializes the instance.
func (AVMIDIProgramChangeEvent) InitWithChannelProgramNumber ¶
func (m AVMIDIProgramChangeEvent) InitWithChannelProgramNumber(channel uint32, programNumber uint32) AVMIDIProgramChangeEvent
Creates a program change event with a channel and program number.
channel: The MIDI channel for the message, between `0` and `15`.
programNumber: The program number to send, between `0` and `127`.
Discussion ¶
The instrument this chooses depends on [MIDIControlChangeMessageTypeBankSelect] events sent prior to this event.
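In standard MIDI, bank select is sent as two 7-bit controller values (CC 0 for the most significant byte, CC 32 for the least significant), and the resulting 14-bit bank number combined with the program number identifies the instrument. A small helper, illustrative only and not part of the bindings, shows how the two bytes combine:

```go
package main

import "fmt"

// bankNumber combines the two 7-bit bank-select controller values
// (CC 0 = MSB, CC 32 = LSB) into the 14-bit bank number that, together with
// a subsequent program change, selects an instrument.
func bankNumber(msb, lsb uint32) uint32 {
	return msb<<7 | lsb
}

func main() {
	fmt.Println(bankNumber(1, 0))  // 128: first patch slot of the second bank
	fmt.Println(bankNumber(0, 32)) // 32
}
```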
func (AVMIDIProgramChangeEvent) ProgramNumber ¶
func (m AVMIDIProgramChangeEvent) ProgramNumber() uint32
The MIDI program number.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIProgramChangeEvent/programNumber
func (AVMIDIProgramChangeEvent) SetProgramNumber ¶
func (m AVMIDIProgramChangeEvent) SetProgramNumber(value uint32)
type AVMIDIProgramChangeEventClass ¶
type AVMIDIProgramChangeEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDIProgramChangeEventClass ¶
func GetAVMIDIProgramChangeEventClass() AVMIDIProgramChangeEventClass
GetAVMIDIProgramChangeEventClass returns the class object for AVMIDIProgramChangeEvent.
func (AVMIDIProgramChangeEventClass) Alloc ¶
func (ac AVMIDIProgramChangeEventClass) Alloc() AVMIDIProgramChangeEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDIProgramChangeEventClass) Class ¶
func (ac AVMIDIProgramChangeEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMIDISysexEvent ¶
type AVMIDISysexEvent struct {
AVMusicEvent
}
An object that represents a MIDI system exclusive message.
Overview ¶
You can’t modify the size and contents of this event once you create it.
Creating a System Event ¶
- AVMIDISysexEvent.InitWithData: Creates a system event with the data you specify.
Getting the Size of the Event ¶
- AVMIDISysexEvent.SizeInBytes: The size of the data that this event contains.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDISysexEvent
func AVMIDISysexEventFromID ¶
func AVMIDISysexEventFromID(id objc.ID) AVMIDISysexEvent
AVMIDISysexEventFromID constructs an AVMIDISysexEvent from an objc.ID.
An object that represents a MIDI system exclusive message.
func NewAVMIDISysexEvent ¶
func NewAVMIDISysexEvent() AVMIDISysexEvent
NewAVMIDISysexEvent creates a new AVMIDISysexEvent instance.
func NewMIDISysexEventWithData ¶
func NewMIDISysexEventWithData(data foundation.INSData) AVMIDISysexEvent
Creates a system event with the data you specify.
data: The data that contains the contents of the system event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDISysexEvent/init(data:)
func (AVMIDISysexEvent) Autorelease ¶
func (m AVMIDISysexEvent) Autorelease() AVMIDISysexEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMIDISysexEvent) Init ¶
func (m AVMIDISysexEvent) Init() AVMIDISysexEvent
Init initializes the instance.
func (AVMIDISysexEvent) InitWithData ¶
func (m AVMIDISysexEvent) InitWithData(data foundation.INSData) AVMIDISysexEvent
Creates a system event with the data you specify.
data: The data that contains the contents of the system event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDISysexEvent/init(data:)
func (AVMIDISysexEvent) SizeInBytes ¶
func (m AVMIDISysexEvent) SizeInBytes() uint32
The size of the data that this event contains.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDISysexEvent/sizeInBytes
type AVMIDISysexEventClass ¶
type AVMIDISysexEventClass struct {
// contains filtered or unexported fields
}
func GetAVMIDISysexEventClass ¶
func GetAVMIDISysexEventClass() AVMIDISysexEventClass
GetAVMIDISysexEventClass returns the class object for AVMIDISysexEvent.
func (AVMIDISysexEventClass) Alloc ¶
func (ac AVMIDISysexEventClass) Alloc() AVMIDISysexEvent
Alloc allocates memory for a new instance of the class.
func (AVMIDISysexEventClass) Class ¶
func (ac AVMIDISysexEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMusicEvent ¶
type AVMusicEvent struct {
objectivec.Object
}
A base class for the events you associate with a music track.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicEvent
func AVMusicEventFromID ¶
func AVMusicEventFromID(id objc.ID) AVMusicEvent
AVMusicEventFromID constructs an AVMusicEvent from an objc.ID.
A base class for the events you associate with a music track.
func NewAVMusicEvent ¶
func NewAVMusicEvent() AVMusicEvent
NewAVMusicEvent creates a new AVMusicEvent instance.
func (AVMusicEvent) Autorelease ¶
func (m AVMusicEvent) Autorelease() AVMusicEvent
Autorelease adds the receiver to the current autorelease pool.
type AVMusicEventClass ¶
type AVMusicEventClass struct {
// contains filtered or unexported fields
}
func GetAVMusicEventClass ¶
func GetAVMusicEventClass() AVMusicEventClass
GetAVMusicEventClass returns the class object for AVMusicEvent.
func (AVMusicEventClass) Alloc ¶
func (ac AVMusicEventClass) Alloc() AVMusicEvent
Alloc allocates memory for a new instance of the class.
func (AVMusicEventClass) Class ¶
func (ac AVMusicEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMusicEventEnumerationBlock ¶
type AVMusicEventEnumerationBlock = func(AVMusicEvent, []float64, *bool)
AVMusicEventEnumerationBlock is a type you use to enumerate and remove music events, if necessary.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicEventEnumerationBlock
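The block receives each event, its timing data, and a `*bool` the block sets to `true` to request removal of that event. The toy enumerator below mimics the pattern over plain values; it is not the real implementation, and the event type is simplified to a string:

```go
package main

import "fmt"

// enumerationBlock mirrors the shape of AVMusicEventEnumerationBlock: the
// enumerator passes each event plus timing data, and the block sets *remove
// to true to delete that event.
type enumerationBlock = func(event string, beats []float64, remove *bool)

// enumerate is a toy stand-in for EnumerateEventsInRangeUsingBlock: it walks
// parallel slices of events and beat positions, dropping entries the block flags.
func enumerate(events []string, beats []float64, block enumerationBlock) ([]string, []float64) {
	var keptEvents []string
	var keptBeats []float64
	for i, e := range events {
		remove := false
		block(e, []float64{beats[i]}, &remove)
		if !remove {
			keptEvents = append(keptEvents, e)
			keptBeats = append(keptBeats, beats[i])
		}
	}
	return keptEvents, keptBeats
}

func main() {
	events := []string{"noteOn", "controlChange", "noteOn"}
	beats := []float64{0, 1, 2}
	// Remove everything that is not a note event.
	kept, _ := enumerate(events, beats, func(e string, _ []float64, remove *bool) {
		if e != "noteOn" {
			*remove = true
		}
	})
	fmt.Println(kept) // [noteOn noteOn]
}
```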
type AVMusicSequenceLoadOptions ¶
type AVMusicSequenceLoadOptions uint
See: https://developer.apple.com/documentation/AVFAudio/AVMusicSequenceLoadOptions
const (
	// AVMusicSequenceLoadSMF_ChannelsToTracks: An option that represents data on different MIDI channels mapped to multiple tracks.
	AVMusicSequenceLoadSMF_ChannelsToTracks AVMusicSequenceLoadOptions = 1
	// AVMusicSequenceLoadSMF_PreserveTracks: An option that preserves the tracks as they are.
	AVMusicSequenceLoadSMF_PreserveTracks AVMusicSequenceLoadOptions = 0
)
func (AVMusicSequenceLoadOptions) String ¶
func (e AVMusicSequenceLoadOptions) String() string
type AVMusicTimeStamp ¶
type AVMusicTimeStamp = float64
AVMusicTimeStamp is a fractional number of beats.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTimeStamp
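Since an AVMusicTimeStamp counts beats rather than seconds, converting it to wall-clock time requires a tempo. The helper below, not part of the bindings, assumes a single fixed tempo; real sequences may contain tempo changes, which this simple model ignores:

```go
package main

import "fmt"

// beatsToSeconds converts an AVMusicTimeStamp-style beat count to seconds at
// a fixed tempo in beats per minute: seconds = beats * 60 / bpm.
func beatsToSeconds(beats, bpm float64) float64 {
	return beats * 60 / bpm
}

func main() {
	fmt.Println(beatsToSeconds(4, 120)) // one 4/4 bar at 120 BPM takes 2 seconds
}
```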
type AVMusicTrack ¶
type AVMusicTrack struct {
objectivec.Object
}
A collection of music events that you can offset, set to a muted state, modify independently from other track events, and send to a specified destination.
Configuring Music Track Properties ¶
- AVMusicTrack.Muted: A Boolean value that indicates whether the track is in a muted state.
- AVMusicTrack.SetMuted
- AVMusicTrack.Soloed: A Boolean value that indicates whether the track is in a soloed state.
- AVMusicTrack.SetSoloed
- AVMusicTrack.OffsetTime: The offset of the track’s start time, in beats.
- AVMusicTrack.SetOffsetTime
- AVMusicTrack.TimeResolution: The time resolution value for the sequence, in ticks (pulses) per quarter note.
- AVMusicTrack.UsesAutomatedParameters: A Boolean value that indicates whether the track is an automation track.
- AVMusicTrack.SetUsesAutomatedParameters
Configuring the Track Duration ¶
- AVMusicTrack.LengthInBeats: The total duration of the track, in beats.
- AVMusicTrack.SetLengthInBeats
- AVMusicTrack.LengthInSeconds: The total duration of the track, in seconds.
- AVMusicTrack.SetLengthInSeconds
Configuring the Track Destinations ¶
- AVMusicTrack.DestinationAudioUnit: The audio unit that receives the track’s events.
- AVMusicTrack.SetDestinationAudioUnit
- AVMusicTrack.DestinationMIDIEndpoint: The MIDI endpoint you specify as the track’s target.
- AVMusicTrack.SetDestinationMIDIEndpoint
Configuring the Looping State ¶
- AVMusicTrack.LoopingEnabled: A Boolean value that indicates whether the track is in a looping state.
- AVMusicTrack.SetLoopingEnabled
- AVMusicTrack.LoopRange: The timestamp range for the loop, in beats.
- AVMusicTrack.SetLoopRange
- AVMusicTrack.NumberOfLoops: The number of times the track’s loop repeats.
- AVMusicTrack.SetNumberOfLoops
Adding and Clearing Events ¶
- AVMusicTrack.AddEventAtBeat: Adds a music event to a track at the time you specify.
- AVMusicTrack.MoveEventsInRangeByAmount: Moves the beat location of all events in the given beat range by the amount you specify.
- AVMusicTrack.ClearEventsInRange: Removes all events in the given beat range from the music track.
Cutting and Copying Events ¶
- AVMusicTrack.CutEventsInRange: Splices all events in the beat range from the music track.
- AVMusicTrack.CopyEventsInRangeFromTrackInsertAtBeat: Copies the events from the source track and splices them into the current music track.
- AVMusicTrack.CopyAndMergeEventsInRangeFromTrackMergeAtBeat: Copies the events from the source track and merges them into the current music track.
Iterating Over Events ¶
- AVMusicTrack.EnumerateEventsInRangeUsingBlock: Iterates through the music events within the track.
Getting the End of Track Timestamp ¶
- AVMusicTrack.AVMusicTimeStampEndOfTrack: A timestamp you use to access all events in a music track through a beat range.
- AVMusicTrack.SetAVMusicTimeStampEndOfTrack
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack
func AVMusicTrackFromID ¶
func AVMusicTrackFromID(id objc.ID) AVMusicTrack
AVMusicTrackFromID constructs an AVMusicTrack from an objc.ID.
A collection of music events that you can offset, set to a muted state, modify independently from other track events, and send to a specified destination.
func NewAVMusicTrack ¶
func NewAVMusicTrack() AVMusicTrack
NewAVMusicTrack creates a new AVMusicTrack instance.
func (AVMusicTrack) AVMusicTimeStampEndOfTrack ¶
func (m AVMusicTrack) AVMusicTimeStampEndOfTrack() float64
A timestamp you use to access all events in a music track through a beat range.
See: https://developer.apple.com/documentation/avfaudio/avmusictimestampendoftrack
func (AVMusicTrack) AddEventAtBeat ¶
func (m AVMusicTrack) AddEventAtBeat(event IAVMusicEvent, beat AVMusicTimeStamp)
Adds a music event to a track at the time you specify.
event: The event to add.
beat: The time to add the event at.
Discussion ¶
The system copies event contents into the track, so you can add the same event at different timestamps. You can’t add all AVMusicEvent subclasses to a track.
- You can only add AVExtendedTempoEvent and AVMIDIMetaEvent with certain AVMIDIMetaEvent.EventType to a sequencer’s tempo track.
- You can add AVParameterEvent to automation tracks.
- You can’t add other event subclasses to tempo or automation tracks.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/addEvent(_:at:)
func (AVMusicTrack) Autorelease ¶
func (m AVMusicTrack) Autorelease() AVMusicTrack
Autorelease adds the receiver to the current autorelease pool.
func (AVMusicTrack) ClearEventsInRange ¶
func (m AVMusicTrack) ClearEventsInRange(range_ AVBeatRange)
Removes all events in the given beat range from the music track.
range: The range of beats.
Discussion ¶
The system won’t modify the events outside of the range you specify.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/clearEvents(in:)
func (AVMusicTrack) CopyAndMergeEventsInRangeFromTrackMergeAtBeat ¶
func (m AVMusicTrack) CopyAndMergeEventsInRangeFromTrackMergeAtBeat(range_ AVBeatRange, sourceTrack IAVMusicTrack, mergeStartBeat AVMusicTimeStamp)
Copies the events from the source track and merges them into the current music track.
range: The range of beats.
sourceTrack: The music track to copy the events from.
mergeStartBeat: The start beat where the copied events merge into.
Discussion ¶
The system won’t modify events originally at or past the start beat. Copying events from track to track follows the same type-exclusion rules as adding events.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/copyAndMergeEvents(in:from:mergeAt:)
func (AVMusicTrack) CopyEventsInRangeFromTrackInsertAtBeat ¶
func (m AVMusicTrack) CopyEventsInRangeFromTrackInsertAtBeat(range_ AVBeatRange, sourceTrack IAVMusicTrack, insertStartBeat AVMusicTimeStamp)
Copies the events from the source track and splices them into the current music track.
range: The range of beats.
sourceTrack: The music track to copy the events from.
insertStartBeat: The start beat to splice the events into.
Discussion ¶
All events originally at or past the insertion beat shift forward by the duration of the copied-in range.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/copyEvents(in:from:insertAt:)
func (AVMusicTrack) CutEventsInRange ¶
func (m AVMusicTrack) CutEventsInRange(range_ AVBeatRange)
Splices all events in the beat range from the music track.
range: The range of beats.
Discussion ¶
All events past the end of the range you specify shift backward by the duration of the range.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/cutEvents(in:)
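The timing effect of a cut can be pictured on a plain list of beat positions: events inside the range disappear, and later events move back by the range's length. The sketch below models only that arithmetic, not the real event objects:

```go
package main

import "fmt"

// cutRange mimics the timing effect of CutEventsInRange on a list of event
// beat positions: events inside [start, end) are removed, and events at or
// past end shift back by the length of the range. Illustrative only.
func cutRange(beats []float64, start, end float64) []float64 {
	var out []float64
	for _, b := range beats {
		switch {
		case b < start:
			out = append(out, b) // before the cut: unchanged
		case b >= end:
			out = append(out, b-(end-start)) // after the cut: shifted back
		}
		// events inside [start, end) are dropped
	}
	return out
}

func main() {
	fmt.Println(cutRange([]float64{0, 1, 2, 3, 4}, 1, 3)) // [0 1 2]
}
```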
func (AVMusicTrack) DestinationAudioUnit ¶
func (m AVMusicTrack) DestinationAudioUnit() IAVAudioUnit
The audio unit that receives the track’s events.
Discussion ¶
This property and a [DestinationMIDIEndpoint] are mutually exclusive. You must attach the audio unit to an audio engine for it to receive events, and the track must be part of the AVAudioSequencer you associate with the same engine. When playing, the track sends its events to that AVAudioUnit. You can’t change the destination audio unit while the track’s sequence is in a playing state.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/destinationAudioUnit
func (AVMusicTrack) DestinationMIDIEndpoint ¶
func (m AVMusicTrack) DestinationMIDIEndpoint() objectivec.IObject
The MIDI endpoint you specify as the track’s target.
Discussion ¶
This property and a [DestinationAudioUnit] are mutually exclusive. Setting this property removes the track’s reference to an AVAudioUnit destination. When playing, the track sends events to the MIDI endpoint. For more information, see MIDIDestinationCreate(_:_:_:_:_:). You can’t change the endpoint while the track’s sequence is in a playing state.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/destinationMIDIEndpoint
func (AVMusicTrack) EnumerateEventsInRangeUsingBlock ¶
func (m AVMusicTrack) EnumerateEventsInRangeUsingBlock(range_ AVBeatRange, block AVMusicEventEnumerationBlock)
Iterates through the music events within the track.
range: The range to iterate through.
block: The block to call for each event.
Discussion ¶
Examine each event the block returns by using isKind(of:) to determine the subclass, and then cast and access it accordingly.
The iteration may continue after removing an event.
The event objects the block returns won’t be the same instances you add to the AVMusicTrack, though their content is identical.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/enumerateEvents(in:using:)
func (AVMusicTrack) LengthInBeats ¶
func (m AVMusicTrack) LengthInBeats() AVMusicTimeStamp
The total duration of the track, in beats.
Discussion ¶
This property returns the beat of the last event in the track, plus any additional time that’s necessary to fade out the ending notes, or to round a loop point to a musical bar.
If the user doesn’t set this value, the track length always adjusts to the end of the last active event in a track, and adjusts dynamically as the user adds or removes events.
This property returns the maximum of the user-set track length or the calculated length.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/lengthInBeats
func (AVMusicTrack) LengthInSeconds ¶
func (m AVMusicTrack) LengthInSeconds() float64
The total duration of the track, in seconds.
Discussion ¶
This property returns the time of the last event in the track, plus any additional time that’s necessary to fade out the ending notes, or to round a loop point to a musical bar.
If the user doesn’t set this value, the track length always adjusts to the end of the last active event in a track, and adjusts dynamically as the user adds or removes events.
This property returns the maximum of the user-set track length or the calculated length.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/lengthInSeconds
func (AVMusicTrack) LoopRange ¶
func (m AVMusicTrack) LoopRange() AVBeatRange
The timestamp range for the loop, in beats.
Discussion ¶
You set the loop by specifying its beat range.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/loopRange
func (AVMusicTrack) LoopingEnabled ¶
func (m AVMusicTrack) LoopingEnabled() bool
A Boolean value that indicates whether the track is in a looping state.
Discussion ¶
If you don’t set [LoopRange], the framework loops the full track.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/isLoopingEnabled
func (AVMusicTrack) MoveEventsInRangeByAmount ¶
func (m AVMusicTrack) MoveEventsInRangeByAmount(range_ AVBeatRange, beatAmount AVMusicTimeStamp)
Moves the beat location of all events in the given beat range by the amount you specify.
range: The range of beats.
beatAmount: The number of beats to shift each event by.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/moveEvents(in:by:)
func (AVMusicTrack) Muted ¶
func (m AVMusicTrack) Muted() bool
A Boolean value that indicates whether the track is in a muted state.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/isMuted
func (AVMusicTrack) NumberOfLoops ¶
func (m AVMusicTrack) NumberOfLoops() int
The number of times the track’s loop repeats.
Discussion ¶
Use the value [AVMusicTrackLoopCountForever] to loop the track forever. Otherwise, valid values start at `1`.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/numberOfLoops
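A looped region's contribution to playback length can be modeled simply. The helper below assumes the region plays once and then repeats `loops` more times; verify that reading of the loop count against Apple's documentation before relying on it. The forever sentinel (`-1`) yields no finite length:

```go
package main

import "fmt"

// loopedBeats estimates how many beats a looped region contributes to
// playback under the assumption that the region plays once and then repeats
// loops more times. A negative loop count (the forever sentinel, -1) means
// the region loops endlessly, so there is no finite total.
func loopedBeats(loopLength float64, loops int) (float64, bool) {
	if loops < 0 {
		return 0, false // loops forever; no finite length
	}
	return loopLength * float64(1+loops), true
}

func main() {
	total, finite := loopedBeats(4, 2)
	fmt.Println(total, finite) // a 4-beat loop repeated twice more: 12 true
}
```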
func (AVMusicTrack) OffsetTime ¶
func (m AVMusicTrack) OffsetTime() AVMusicTimeStamp
The offset of the track’s start time, in beats.
Discussion ¶
By default, this value is `0`.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/offsetTime
func (AVMusicTrack) SetAVMusicTimeStampEndOfTrack ¶
func (m AVMusicTrack) SetAVMusicTimeStampEndOfTrack(value float64)
func (AVMusicTrack) SetDestinationAudioUnit ¶
func (m AVMusicTrack) SetDestinationAudioUnit(value IAVAudioUnit)
func (AVMusicTrack) SetDestinationMIDIEndpoint ¶
func (m AVMusicTrack) SetDestinationMIDIEndpoint(value objectivec.IObject)
func (AVMusicTrack) SetLengthInBeats ¶
func (m AVMusicTrack) SetLengthInBeats(value AVMusicTimeStamp)
func (AVMusicTrack) SetLengthInSeconds ¶
func (m AVMusicTrack) SetLengthInSeconds(value float64)
func (AVMusicTrack) SetLoopRange ¶
func (m AVMusicTrack) SetLoopRange(value AVBeatRange)
func (AVMusicTrack) SetLoopingEnabled ¶
func (m AVMusicTrack) SetLoopingEnabled(value bool)
func (AVMusicTrack) SetMuted ¶
func (m AVMusicTrack) SetMuted(value bool)
func (AVMusicTrack) SetNumberOfLoops ¶
func (m AVMusicTrack) SetNumberOfLoops(value int)
func (AVMusicTrack) SetOffsetTime ¶
func (m AVMusicTrack) SetOffsetTime(value AVMusicTimeStamp)
func (AVMusicTrack) SetSoloed ¶
func (m AVMusicTrack) SetSoloed(value bool)
func (AVMusicTrack) SetUsesAutomatedParameters ¶
func (m AVMusicTrack) SetUsesAutomatedParameters(value bool)
func (AVMusicTrack) Soloed ¶
func (m AVMusicTrack) Soloed() bool
A Boolean value that indicates whether the track is in a soloed state.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/isSoloed
func (AVMusicTrack) TimeResolution ¶
func (m AVMusicTrack) TimeResolution() uint
The time resolution value for the sequence, in ticks (pulses) per quarter note.
Discussion ¶
If you use a MIDI file to construct the containing sequence, the resolution comes from the file. If you want to keep a time resolution when writing a new file, retrieve this value and then specify it when writing to an audio sequencer. It doesn’t affect the rendering or notion of time of the sequence, only its MIDI file representation.
By default, the framework sets this value to `480` when creating the sequence manually, or to a value from a MIDI file if you use it to create the sequence.
You can only retrieve this value from the tempo track.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/timeResolution
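Time resolution relates MIDI-file ticks to beats: a tick count divided by the ticks-per-quarter-note value gives a beat position. A small helper, not part of the bindings, makes the conversion explicit:

```go
package main

import "fmt"

// ticksToBeats converts MIDI-file ticks to beats (quarter notes) using a
// sequence's time resolution in ticks per quarter note, e.g. the default 480.
func ticksToBeats(ticks, ticksPerQuarter uint) float64 {
	return float64(ticks) / float64(ticksPerQuarter)
}

func main() {
	fmt.Println(ticksToBeats(960, 480)) // 960 ticks at 480 TPQN is 2 beats
}
```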
func (AVMusicTrack) UsesAutomatedParameters ¶
func (m AVMusicTrack) UsesAutomatedParameters() bool
A Boolean value that indicates whether the track is an automation track.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack/usesAutomatedParameters
type AVMusicTrackClass ¶
type AVMusicTrackClass struct {
// contains filtered or unexported fields
}
func GetAVMusicTrackClass ¶
func GetAVMusicTrackClass() AVMusicTrackClass
GetAVMusicTrackClass returns the class object for AVMusicTrack.
func (AVMusicTrackClass) Alloc ¶
func (ac AVMusicTrackClass) Alloc() AVMusicTrack
Alloc allocates memory for a new instance of the class.
func (AVMusicTrackClass) Class ¶
func (ac AVMusicTrackClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVMusicTrackLoopCount ¶
type AVMusicTrackLoopCount int
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrackLoopCount
const (
	// AVMusicTrackLoopCountForever: A track that loops forever.
	AVMusicTrackLoopCountForever AVMusicTrackLoopCount = -1
)
func (AVMusicTrackLoopCount) String ¶
func (e AVMusicTrackLoopCount) String() string
type AVMusicUserEvent ¶
type AVMusicUserEvent struct {
AVMusicEvent
}
An object that represents a custom user message.
Overview ¶
When playback of an AVMusicTrack reaches this event, the system calls the track’s callback. You can’t modify the size and contents of an AVMusicUserEvent once you create it.
Creating a User Event ¶
- AVMusicUserEvent.InitWithData: Creates a user event with the data you specify.
Inspecting a User Event ¶
- AVMusicUserEvent.SizeInBytes: The size of the data that the user event represents.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicUserEvent
func AVMusicUserEventFromID ¶
func AVMusicUserEventFromID(id objc.ID) AVMusicUserEvent
AVMusicUserEventFromID constructs a AVMusicUserEvent from an objc.ID.
An object that represents a custom user message.
func NewAVMusicUserEvent ¶
func NewAVMusicUserEvent() AVMusicUserEvent
NewAVMusicUserEvent creates a new AVMusicUserEvent instance.
func NewMusicUserEventWithData ¶
func NewMusicUserEventWithData(data foundation.INSData) AVMusicUserEvent
Creates a user event with the data you specify.
data: The contents a music track returns on callback.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicUserEvent/init(data:)
func (AVMusicUserEvent) Autorelease ¶
func (m AVMusicUserEvent) Autorelease() AVMusicUserEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVMusicUserEvent) Init ¶
func (m AVMusicUserEvent) Init() AVMusicUserEvent
Init initializes the instance.
func (AVMusicUserEvent) InitWithData ¶
func (m AVMusicUserEvent) InitWithData(data foundation.INSData) AVMusicUserEvent
Creates a user event with the data you specify.
data: The contents a music track returns on callback.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicUserEvent/init(data:)
func (AVMusicUserEvent) SizeInBytes ¶
func (m AVMusicUserEvent) SizeInBytes() uint32
The size of the data that the user event represents.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicUserEvent/sizeInBytes
type AVMusicUserEventClass ¶
type AVMusicUserEventClass struct {
// contains filtered or unexported fields
}
func GetAVMusicUserEventClass ¶
func GetAVMusicUserEventClass() AVMusicUserEventClass
GetAVMusicUserEventClass returns the class object for AVMusicUserEvent.
func (AVMusicUserEventClass) Alloc ¶
func (ac AVMusicUserEventClass) Alloc() AVMusicUserEvent
Alloc allocates memory for a new instance of the class.
func (AVMusicUserEventClass) Class ¶
func (ac AVMusicUserEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVParameterEvent ¶
type AVParameterEvent struct {
AVMusicEvent
}
An object that represents a parameter event on a music track’s destination.
Overview ¶
When you configure an audio unit as the destination for an AVMusicTrack that contains this event, you can schedule and automate parameter changes.
When the track is playing as part of a sequence, the destination audio unit receives set-parameter messages whose values change smoothly along a linear ramp between each event’s beat location.
If you add an event to an empty, non-automation track, the track becomes an automation track.
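The linear-ramp behavior described above can be illustrated without the bindings: between two consecutive parameter events, the destination's parameter value is interpolated linearly by beat position. A hedged sketch; the `paramEvent` struct here is a hypothetical stand-in, not the real AVParameterEvent:

```go
package main

import "fmt"

// paramEvent is a hypothetical stand-in for a parameter event:
// a parameter value that takes effect at a given beat location.
type paramEvent struct {
	beat  float64
	value float32
}

// valueAt linearly interpolates between two consecutive events,
// mirroring the ramped set-parameter messages the destination
// audio unit receives during playback.
func valueAt(a, b paramEvent, beat float64) float32 {
	if beat <= a.beat {
		return a.value
	}
	if beat >= b.beat {
		return b.value
	}
	t := (beat - a.beat) / (b.beat - a.beat)
	return a.value + float32(t)*(b.value-a.value)
}

func main() {
	a := paramEvent{beat: 0, value: 0.0}
	b := paramEvent{beat: 4, value: 1.0}
	fmt.Println(valueAt(a, b, 2)) // halfway along the ramp → 0.5
}
```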
Creating a Parameter Event ¶
- AVParameterEvent.InitWithParameterIDScopeElementValue: Creates an event with a parameter identifier, scope, element, and value for the parameter to set.
Configuring a Parameter Event ¶
- AVParameterEvent.ParameterID: The identifier of the parameter.
- AVParameterEvent.SetParameterID
- AVParameterEvent.Scope: The audio unit scope for the parameter.
- AVParameterEvent.SetScope
- AVParameterEvent.Element: The element index in the scope.
- AVParameterEvent.SetElement
- AVParameterEvent.Value: The value of the parameter to set.
- AVParameterEvent.SetValue
See: https://developer.apple.com/documentation/AVFAudio/AVParameterEvent
func AVParameterEventFromID ¶
func AVParameterEventFromID(id objc.ID) AVParameterEvent
AVParameterEventFromID constructs a AVParameterEvent from an objc.ID.
An object that represents a parameter event on a music track’s destination.
func NewAVParameterEvent ¶
func NewAVParameterEvent() AVParameterEvent
NewAVParameterEvent creates a new AVParameterEvent instance.
func NewParameterEventWithParameterIDScopeElementValue ¶
func NewParameterEventWithParameterIDScopeElementValue(parameterID uint32, scope uint32, element uint32, value float32) AVParameterEvent
Creates an event with a parameter identifier, scope, element, and value for the parameter to set.
parameterID: The identifier of the parameter.
scope: The audio unit scope for the parameter.
element: The element index in the scope.
value: The value of the parameter to set.
Discussion ¶
For more information about the parameters, see AudioUnitParameterID, AudioUnitScope, and AudioUnitElement. The valid range of values depends on the parameter you set.
func (AVParameterEvent) Autorelease ¶
func (p AVParameterEvent) Autorelease() AVParameterEvent
Autorelease adds the receiver to the current autorelease pool.
func (AVParameterEvent) Element ¶
func (p AVParameterEvent) Element() uint32
The element index in the scope.
See: https://developer.apple.com/documentation/AVFAudio/AVParameterEvent/element
func (AVParameterEvent) Init ¶
func (p AVParameterEvent) Init() AVParameterEvent
Init initializes the instance.
func (AVParameterEvent) InitWithParameterIDScopeElementValue ¶
func (p AVParameterEvent) InitWithParameterIDScopeElementValue(parameterID uint32, scope uint32, element uint32, value float32) AVParameterEvent
Creates an event with a parameter identifier, scope, element, and value for the parameter to set.
parameterID: The identifier of the parameter.
scope: The audio unit scope for the parameter.
element: The element index in the scope.
value: The value of the parameter to set.
Discussion ¶
For more information about the parameters, see AudioUnitParameterID, AudioUnitScope, and AudioUnitElement. The valid range of values depends on the parameter you set.
func (AVParameterEvent) ParameterID ¶
func (p AVParameterEvent) ParameterID() uint32
The identifier of the parameter.
See: https://developer.apple.com/documentation/AVFAudio/AVParameterEvent/parameterID
func (AVParameterEvent) Scope ¶
func (p AVParameterEvent) Scope() uint32
The audio unit scope for the parameter.
See: https://developer.apple.com/documentation/AVFAudio/AVParameterEvent/scope
func (AVParameterEvent) SetElement ¶
func (p AVParameterEvent) SetElement(value uint32)
func (AVParameterEvent) SetParameterID ¶
func (p AVParameterEvent) SetParameterID(value uint32)
func (AVParameterEvent) SetScope ¶
func (p AVParameterEvent) SetScope(value uint32)
func (AVParameterEvent) SetValue ¶
func (p AVParameterEvent) SetValue(value float32)
func (AVParameterEvent) Value ¶
func (p AVParameterEvent) Value() float32
The value of the parameter to set.
See: https://developer.apple.com/documentation/AVFAudio/AVParameterEvent/value
type AVParameterEventClass ¶
type AVParameterEventClass struct {
// contains filtered or unexported fields
}
func GetAVParameterEventClass ¶
func GetAVParameterEventClass() AVParameterEventClass
GetAVParameterEventClass returns the class object for AVParameterEvent.
func (AVParameterEventClass) Alloc ¶
func (ac AVParameterEventClass) Alloc() AVParameterEvent
Alloc allocates memory for a new instance of the class.
func (AVParameterEventClass) Class ¶
func (ac AVParameterEventClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVSpeechBoundary ¶
type AVSpeechBoundary int
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechBoundary
const (
    // AVSpeechBoundaryImmediate: Indicates to pause or stop speech immediately.
    AVSpeechBoundaryImmediate AVSpeechBoundary = 0
    // AVSpeechBoundaryWord: Indicates to pause or stop speech after the synthesizer finishes speaking the current word.
    AVSpeechBoundaryWord AVSpeechBoundary = 1
)
func (AVSpeechBoundary) String ¶
func (e AVSpeechBoundary) String() string
type AVSpeechSynthesisMarker ¶
type AVSpeechSynthesisMarker struct {
objectivec.Object
}
An object that contains information about the synthesized audio.
Creating a marker ¶
- AVSpeechSynthesisMarker.InitWithMarkerTypeForTextRangeAtByteSampleOffset: Creates a marker with a type and location of the request’s text.
- AVSpeechSynthesisMarker.InitWithWordRangeAtByteSampleOffset: Creates a word marker with a range of the word and offset into the audio buffer.
- AVSpeechSynthesisMarker.InitWithSentenceRangeAtByteSampleOffset: Creates a sentence marker with a range of the sentence and offset into the audio buffer.
- AVSpeechSynthesisMarker.InitWithParagraphRangeAtByteSampleOffset: Creates a paragraph marker with a range of the paragraph and offset into the audio buffer.
- AVSpeechSynthesisMarker.InitWithPhonemeStringAtByteSampleOffset: Creates a phoneme marker with a range of the phoneme and offset into the audio buffer.
- AVSpeechSynthesisMarker.InitWithBookmarkNameAtByteSampleOffset: Creates a bookmark marker with a name and offset into the audio buffer.
Inspecting a marker ¶
- AVSpeechSynthesisMarker.Mark: The type that describes the text.
- AVSpeechSynthesisMarker.SetMark
- AVSpeechSynthesisMarker.BookmarkName: A string that represents the name of a bookmark.
- AVSpeechSynthesisMarker.SetBookmarkName
- AVSpeechSynthesisMarker.Phoneme: A string that represents a distinct sound.
- AVSpeechSynthesisMarker.SetPhoneme
- AVSpeechSynthesisMarker.TextRange: The location and length of the request’s text.
- AVSpeechSynthesisMarker.SetTextRange
- AVSpeechSynthesisMarker.ByteSampleOffset: The byte offset into the audio buffer.
- AVSpeechSynthesisMarker.SetByteSampleOffset
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisMarker
func AVSpeechSynthesisMarkerFromID ¶
func AVSpeechSynthesisMarkerFromID(id objc.ID) AVSpeechSynthesisMarker
AVSpeechSynthesisMarkerFromID constructs a AVSpeechSynthesisMarker from an objc.ID.
An object that contains information about the synthesized audio.
func NewAVSpeechSynthesisMarker ¶
func NewAVSpeechSynthesisMarker() AVSpeechSynthesisMarker
NewAVSpeechSynthesisMarker creates a new AVSpeechSynthesisMarker instance.
func NewSpeechSynthesisMarkerWithBookmarkNameAtByteSampleOffset ¶
func NewSpeechSynthesisMarkerWithBookmarkNameAtByteSampleOffset(mark string, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a bookmark marker with a name and offset into the audio buffer.
mark: The name of the bookmark.
byteSampleOffset: The byte offset into the audio buffer.
func NewSpeechSynthesisMarkerWithMarkerTypeForTextRangeAtByteSampleOffset ¶
func NewSpeechSynthesisMarkerWithMarkerTypeForTextRangeAtByteSampleOffset(type_ AVSpeechSynthesisMarkerMark, range_ foundation.NSRange, byteSampleOffset uint) AVSpeechSynthesisMarker
Creates a marker with a type and location of the request’s text.
type: The type that describes the text.
range: The location and length of the request’s text.
byteSampleOffset: The byte offset into the audio buffer.
func NewSpeechSynthesisMarkerWithParagraphRangeAtByteSampleOffset ¶
func NewSpeechSynthesisMarkerWithParagraphRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a paragraph marker with a range of the paragraph and offset into the audio buffer.
range: The location and length of the paragraph.
byteSampleOffset: The byte offset into the audio buffer.
func NewSpeechSynthesisMarkerWithPhonemeStringAtByteSampleOffset ¶
func NewSpeechSynthesisMarkerWithPhonemeStringAtByteSampleOffset(phoneme string, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a phoneme marker with a range of the phoneme and offset into the audio buffer.
phoneme: A string that represents a distinct sound.
byteSampleOffset: The byte offset into the audio buffer.
func NewSpeechSynthesisMarkerWithSentenceRangeAtByteSampleOffset ¶
func NewSpeechSynthesisMarkerWithSentenceRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a sentence marker with a range of the sentence and offset into the audio buffer.
range: The location and length of the sentence.
byteSampleOffset: The byte offset into the audio buffer.
func NewSpeechSynthesisMarkerWithWordRangeAtByteSampleOffset ¶
func NewSpeechSynthesisMarkerWithWordRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a word marker with a range of the word and offset into the audio buffer.
range: The location and length of the word.
byteSampleOffset: The byte offset into the audio buffer.
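To see what the word-marker initializer's inputs look like, here is a plain-Go sketch that computes a (location, length) range for each word of an utterance string, analogous to the foundation.NSRange values a synthesizer would pair with audio-buffer byte offsets. The `textRange` type is a stand-in, not the real NSRange:

```go
package main

import (
	"fmt"
	"unicode"
)

// textRange is a stand-in for foundation.NSRange: a byte location and length.
type textRange struct {
	Location, Length int
}

// wordRanges returns the range of each whitespace-separated word in s,
// the kind of range a word marker combines with a byte sample offset.
func wordRanges(s string) []textRange {
	var ranges []textRange
	start := -1
	for i, r := range s {
		if unicode.IsSpace(r) {
			if start >= 0 {
				ranges = append(ranges, textRange{start, i - start})
				start = -1
			}
		} else if start < 0 {
			start = i
		}
	}
	if start >= 0 {
		ranges = append(ranges, textRange{start, len(s) - start})
	}
	return ranges
}

func main() {
	s := "hello spoken world"
	for _, r := range wordRanges(s) {
		fmt.Println(r.Location, r.Length, s[r.Location:r.Location+r.Length])
	}
}
```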
func (AVSpeechSynthesisMarker) Autorelease ¶
func (s AVSpeechSynthesisMarker) Autorelease() AVSpeechSynthesisMarker
Autorelease adds the receiver to the current autorelease pool.
func (AVSpeechSynthesisMarker) BookmarkName ¶
func (s AVSpeechSynthesisMarker) BookmarkName() string
A string that represents the name of a bookmark.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisMarker/bookmarkName
func (AVSpeechSynthesisMarker) ByteSampleOffset ¶
func (s AVSpeechSynthesisMarker) ByteSampleOffset() uint
The byte offset into the audio buffer.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisMarker/byteSampleOffset
func (AVSpeechSynthesisMarker) EncodeWithCoder ¶
func (s AVSpeechSynthesisMarker) EncodeWithCoder(coder foundation.INSCoder)
func (AVSpeechSynthesisMarker) Init ¶
func (s AVSpeechSynthesisMarker) Init() AVSpeechSynthesisMarker
Init initializes the instance.
func (AVSpeechSynthesisMarker) InitWithBookmarkNameAtByteSampleOffset ¶
func (s AVSpeechSynthesisMarker) InitWithBookmarkNameAtByteSampleOffset(mark string, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a bookmark marker with a name and offset into the audio buffer.
mark: The name of the bookmark.
byteSampleOffset: The byte offset into the audio buffer.
func (AVSpeechSynthesisMarker) InitWithMarkerTypeForTextRangeAtByteSampleOffset ¶
func (s AVSpeechSynthesisMarker) InitWithMarkerTypeForTextRangeAtByteSampleOffset(type_ AVSpeechSynthesisMarkerMark, range_ foundation.NSRange, byteSampleOffset uint) AVSpeechSynthesisMarker
Creates a marker with a type and location of the request’s text.
type: The type that describes the text.
range: The location and length of the request’s text.
byteSampleOffset: The byte offset into the audio buffer.
func (AVSpeechSynthesisMarker) InitWithParagraphRangeAtByteSampleOffset ¶
func (s AVSpeechSynthesisMarker) InitWithParagraphRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a paragraph marker with a range of the paragraph and offset into the audio buffer.
range: The location and length of the paragraph.
byteSampleOffset: The byte offset into the audio buffer.
func (AVSpeechSynthesisMarker) InitWithPhonemeStringAtByteSampleOffset ¶
func (s AVSpeechSynthesisMarker) InitWithPhonemeStringAtByteSampleOffset(phoneme string, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a phoneme marker with a range of the phoneme and offset into the audio buffer.
phoneme: A string that represents a distinct sound.
byteSampleOffset: The byte offset into the audio buffer.
func (AVSpeechSynthesisMarker) InitWithSentenceRangeAtByteSampleOffset ¶
func (s AVSpeechSynthesisMarker) InitWithSentenceRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a sentence marker with a range of the sentence and offset into the audio buffer.
range: The location and length of the sentence.
byteSampleOffset: The byte offset into the audio buffer.
func (AVSpeechSynthesisMarker) InitWithWordRangeAtByteSampleOffset ¶
func (s AVSpeechSynthesisMarker) InitWithWordRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
Creates a word marker with a range of the word and offset into the audio buffer.
range: The location and length of the word.
byteSampleOffset: The byte offset into the audio buffer.
func (AVSpeechSynthesisMarker) Mark ¶
func (s AVSpeechSynthesisMarker) Mark() AVSpeechSynthesisMarkerMark
The type that describes the text.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisMarker/mark-swift.property
func (AVSpeechSynthesisMarker) Phoneme ¶
func (s AVSpeechSynthesisMarker) Phoneme() string
A string that represents a distinct sound.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisMarker/phoneme
func (AVSpeechSynthesisMarker) SetBookmarkName ¶
func (s AVSpeechSynthesisMarker) SetBookmarkName(value string)
func (AVSpeechSynthesisMarker) SetByteSampleOffset ¶
func (s AVSpeechSynthesisMarker) SetByteSampleOffset(value uint)
func (AVSpeechSynthesisMarker) SetMark ¶
func (s AVSpeechSynthesisMarker) SetMark(value AVSpeechSynthesisMarkerMark)
func (AVSpeechSynthesisMarker) SetPhoneme ¶
func (s AVSpeechSynthesisMarker) SetPhoneme(value string)
func (AVSpeechSynthesisMarker) SetSpeechSynthesisOutputMetadataBlock ¶
func (s AVSpeechSynthesisMarker) SetSpeechSynthesisOutputMetadataBlock(value AVSpeechSynthesisProviderOutputBlock)
func (AVSpeechSynthesisMarker) SetTextRange ¶
func (s AVSpeechSynthesisMarker) SetTextRange(value foundation.NSRange)
func (AVSpeechSynthesisMarker) SpeechSynthesisOutputMetadataBlock ¶
func (s AVSpeechSynthesisMarker) SpeechSynthesisOutputMetadataBlock() AVSpeechSynthesisProviderOutputBlock
A block that subclasses use to send marker information to the host.
func (AVSpeechSynthesisMarker) TextRange ¶
func (s AVSpeechSynthesisMarker) TextRange() foundation.NSRange
The location and length of the request’s text.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisMarker/textRange
type AVSpeechSynthesisMarkerClass ¶
type AVSpeechSynthesisMarkerClass struct {
// contains filtered or unexported fields
}
func GetAVSpeechSynthesisMarkerClass ¶
func GetAVSpeechSynthesisMarkerClass() AVSpeechSynthesisMarkerClass
GetAVSpeechSynthesisMarkerClass returns the class object for AVSpeechSynthesisMarker.
func (AVSpeechSynthesisMarkerClass) Alloc ¶
func (ac AVSpeechSynthesisMarkerClass) Alloc() AVSpeechSynthesisMarker
Alloc allocates memory for a new instance of the class.
func (AVSpeechSynthesisMarkerClass) Class ¶
func (ac AVSpeechSynthesisMarkerClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVSpeechSynthesisMarkerMark ¶
type AVSpeechSynthesisMarkerMark int
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisMarker/Mark-swift.enum
const (
    // AVSpeechSynthesisMarkerMarkBookmark: A Speech Synthesis Markup Language (SSML) mark tag.
    AVSpeechSynthesisMarkerMarkBookmark AVSpeechSynthesisMarkerMark = 4
    // AVSpeechSynthesisMarkerMarkParagraph: A type of text that represents a paragraph.
    AVSpeechSynthesisMarkerMarkParagraph AVSpeechSynthesisMarkerMark = 3
    // AVSpeechSynthesisMarkerMarkPhoneme: A type of text that represents a phoneme.
    AVSpeechSynthesisMarkerMarkPhoneme AVSpeechSynthesisMarkerMark = 0
    // AVSpeechSynthesisMarkerMarkSentence: A type of text that represents a sentence.
    AVSpeechSynthesisMarkerMarkSentence AVSpeechSynthesisMarkerMark = 2
    // AVSpeechSynthesisMarkerMarkWord: A type of text that represents a word.
    AVSpeechSynthesisMarkerMarkWord AVSpeechSynthesisMarkerMark = 1
)
func (AVSpeechSynthesisMarkerMark) String ¶
func (e AVSpeechSynthesisMarkerMark) String() string
type AVSpeechSynthesisPersonalVoiceAuthorizationStatus ¶
type AVSpeechSynthesisPersonalVoiceAuthorizationStatus int
const (
    // AVSpeechSynthesisPersonalVoiceAuthorizationStatusAuthorized: The user granted your app’s request to use personal voices.
    AVSpeechSynthesisPersonalVoiceAuthorizationStatusAuthorized AVSpeechSynthesisPersonalVoiceAuthorizationStatus = 3
    // AVSpeechSynthesisPersonalVoiceAuthorizationStatusDenied: The user denied your app’s request to use personal voices.
    AVSpeechSynthesisPersonalVoiceAuthorizationStatusDenied AVSpeechSynthesisPersonalVoiceAuthorizationStatus = 1
    // AVSpeechSynthesisPersonalVoiceAuthorizationStatusNotDetermined: The app hasn’t requested authorization to use personal voices.
    AVSpeechSynthesisPersonalVoiceAuthorizationStatusNotDetermined AVSpeechSynthesisPersonalVoiceAuthorizationStatus = 0
    // AVSpeechSynthesisPersonalVoiceAuthorizationStatusUnsupported: The device doesn’t support personal voices.
    AVSpeechSynthesisPersonalVoiceAuthorizationStatusUnsupported AVSpeechSynthesisPersonalVoiceAuthorizationStatus = 2
)
func (AVSpeechSynthesisPersonalVoiceAuthorizationStatus) String ¶
func (e AVSpeechSynthesisPersonalVoiceAuthorizationStatus) String() string
type AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler ¶
type AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler = func(AVSpeechSynthesisPersonalVoiceAuthorizationStatus)
AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler is a completion handler that the system calls after the user responds to a request to authorize use of personal voices. It receives the authorization status as an argument.
Used by:
- [AVSpeechSynthesizer.RequestPersonalVoiceAuthorizationWithCompletionHandler]
type AVSpeechSynthesisProviderAudioUnit ¶
type AVSpeechSynthesisProviderAudioUnit struct {
objectivec.Object
}
An object that generates speech from text.
Overview ¶
Use a speech synthesizer audio unit to generate audio buffers that contain speech for a given voice and speech markup. The audio unit receives an AVSpeechSynthesisProviderRequest as input, and extracts audio buffers through the render block.
Use AVSpeechSynthesisProviderAudioUnit.SpeechSynthesisOutputMetadataBlock to provide metadata as an array of AVSpeechSynthesisMarker.
The system scans and loads voices for audio unit extensions of this type, and the voices it provides are available for use in AVSpeechSynthesizer and accessibility technologies like VoiceOver and Speak Screen.
Rendering speech ¶
- AVSpeechSynthesisProviderAudioUnit.SynthesizeSpeechRequest: Sets the text to synthesize and the voice to use.
Supplying metadata ¶
- AVSpeechSynthesisProviderAudioUnit.SpeechSynthesisOutputMetadataBlock: A block that subclasses use to send marker information to the host.
- AVSpeechSynthesisProviderAudioUnit.SetSpeechSynthesisOutputMetadataBlock
Getting and setting voices ¶
- AVSpeechSynthesisProviderAudioUnit.SpeechVoices: A list of voices the audio unit provides to the system.
- AVSpeechSynthesisProviderAudioUnit.SetSpeechVoices
Cancelling a request ¶
- AVSpeechSynthesisProviderAudioUnit.CancelSpeechRequest: Informs the audio unit to discard the speech request.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderAudioUnit
func AVSpeechSynthesisProviderAudioUnitFromID ¶
func AVSpeechSynthesisProviderAudioUnitFromID(id objc.ID) AVSpeechSynthesisProviderAudioUnit
AVSpeechSynthesisProviderAudioUnitFromID constructs a AVSpeechSynthesisProviderAudioUnit from an objc.ID.
An object that generates speech from text.
func NewAVSpeechSynthesisProviderAudioUnit ¶
func NewAVSpeechSynthesisProviderAudioUnit() AVSpeechSynthesisProviderAudioUnit
NewAVSpeechSynthesisProviderAudioUnit creates a new AVSpeechSynthesisProviderAudioUnit instance.
func (AVSpeechSynthesisProviderAudioUnit) Autorelease ¶
func (s AVSpeechSynthesisProviderAudioUnit) Autorelease() AVSpeechSynthesisProviderAudioUnit
Autorelease adds the receiver to the current autorelease pool.
func (AVSpeechSynthesisProviderAudioUnit) CancelSpeechRequest ¶
func (s AVSpeechSynthesisProviderAudioUnit) CancelSpeechRequest()
Informs the audio unit to discard the speech request.
func (AVSpeechSynthesisProviderAudioUnit) Init ¶
func (s AVSpeechSynthesisProviderAudioUnit) Init() AVSpeechSynthesisProviderAudioUnit
Init initializes the instance.
func (AVSpeechSynthesisProviderAudioUnit) SetSpeechSynthesisOutputMetadataBlock ¶
func (s AVSpeechSynthesisProviderAudioUnit) SetSpeechSynthesisOutputMetadataBlock(value AVSpeechSynthesisProviderOutputBlock)
func (AVSpeechSynthesisProviderAudioUnit) SetSpeechVoices ¶
func (s AVSpeechSynthesisProviderAudioUnit) SetSpeechVoices(value []AVSpeechSynthesisProviderVoice)
func (AVSpeechSynthesisProviderAudioUnit) SpeechSynthesisOutputMetadataBlock ¶
func (s AVSpeechSynthesisProviderAudioUnit) SpeechSynthesisOutputMetadataBlock() AVSpeechSynthesisProviderOutputBlock
A block that subclasses use to send marker information to the host.
Discussion ¶
A host sets this block to retrieve metadata for a request.
A synthesizer calls this block when it produces data relevant to the audio buffers it’s sending back to a host. In some cases, the system may delay speech output until it delivers these markers. For example, word highlighting depends on marker data from synthesizers to determine which word to highlight and when. The array of markers can reference audio buffers that the system delivers at a later time.
There may be cases where a subclass doesn’t have marker data until it completes extra audio processing. If marker data changes, this block replaces that audio buffer range’s marker data.
func (AVSpeechSynthesisProviderAudioUnit) SpeechVoices ¶
func (s AVSpeechSynthesisProviderAudioUnit) SpeechVoices() []AVSpeechSynthesisProviderVoice
A list of voices the audio unit provides to the system.
Discussion ¶
The list of voices that a user selects through Settings. Speech synthesizer audio unit extensions must provide this list. Override the getter to perform complex fetches that provide a dynamic list of voices.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderAudioUnit/speechVoices
func (AVSpeechSynthesisProviderAudioUnit) SynthesizeSpeechRequest ¶
func (s AVSpeechSynthesisProviderAudioUnit) SynthesizeSpeechRequest(speechRequest IAVSpeechSynthesisProviderRequest)
Sets the text to synthesize and the voice to use.
speechRequest: A speech request to synthesize.
Discussion ¶
When the synthesizer finishes generating audio buffers for the speech request, use AUInternalRenderBlock to report offlineUnitRenderAction_Complete.
type AVSpeechSynthesisProviderAudioUnitClass ¶
type AVSpeechSynthesisProviderAudioUnitClass struct {
// contains filtered or unexported fields
}
func GetAVSpeechSynthesisProviderAudioUnitClass ¶
func GetAVSpeechSynthesisProviderAudioUnitClass() AVSpeechSynthesisProviderAudioUnitClass
GetAVSpeechSynthesisProviderAudioUnitClass returns the class object for AVSpeechSynthesisProviderAudioUnit.
func (AVSpeechSynthesisProviderAudioUnitClass) Alloc ¶
func (ac AVSpeechSynthesisProviderAudioUnitClass) Alloc() AVSpeechSynthesisProviderAudioUnit
Alloc allocates memory for a new instance of the class.
func (AVSpeechSynthesisProviderAudioUnitClass) Class ¶
func (ac AVSpeechSynthesisProviderAudioUnitClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVSpeechSynthesisProviderOutputBlock ¶
type AVSpeechSynthesisProviderOutputBlock = func([]AVSpeechSynthesisMarker, AVSpeechSynthesisProviderRequest)
AVSpeechSynthesisProviderOutputBlock is a type that represents the method for sending marker information to the host.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderOutputBlock
type AVSpeechSynthesisProviderRequest ¶
type AVSpeechSynthesisProviderRequest struct {
objectivec.Object
}
An object that represents the text to synthesize and the voice to use.
Creating a request ¶
- AVSpeechSynthesisProviderRequest.InitWithSSMLRepresentationVoice: Creates a request with a voice and a description.
Inspecting a request ¶
- AVSpeechSynthesisProviderRequest.SsmlRepresentation: The description of the text to synthesize.
- AVSpeechSynthesisProviderRequest.Voice: The voice to use in the speech request.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderRequest
func AVSpeechSynthesisProviderRequestFromID ¶
func AVSpeechSynthesisProviderRequestFromID(id objc.ID) AVSpeechSynthesisProviderRequest
AVSpeechSynthesisProviderRequestFromID constructs a AVSpeechSynthesisProviderRequest from an objc.ID.
An object that represents the text to synthesize and the voice to use.
func NewAVSpeechSynthesisProviderRequest ¶
func NewAVSpeechSynthesisProviderRequest() AVSpeechSynthesisProviderRequest
NewAVSpeechSynthesisProviderRequest creates a new AVSpeechSynthesisProviderRequest instance.
func NewSpeechSynthesisProviderRequestWithSSMLRepresentationVoice ¶
func NewSpeechSynthesisProviderRequestWithSSMLRepresentationVoice(text string, voice IAVSpeechSynthesisProviderVoice) AVSpeechSynthesisProviderRequest
Creates a request with a voice and a description.
text: The description of the text to synthesize.
voice: The voice to use in the speech request.
func (AVSpeechSynthesisProviderRequest) Autorelease ¶
func (s AVSpeechSynthesisProviderRequest) Autorelease() AVSpeechSynthesisProviderRequest
Autorelease adds the receiver to the current autorelease pool.
func (AVSpeechSynthesisProviderRequest) EncodeWithCoder ¶
func (s AVSpeechSynthesisProviderRequest) EncodeWithCoder(coder foundation.INSCoder)
func (AVSpeechSynthesisProviderRequest) Init ¶
func (s AVSpeechSynthesisProviderRequest) Init() AVSpeechSynthesisProviderRequest
Init initializes the instance.
func (AVSpeechSynthesisProviderRequest) InitWithSSMLRepresentationVoice ¶
func (s AVSpeechSynthesisProviderRequest) InitWithSSMLRepresentationVoice(text string, voice IAVSpeechSynthesisProviderVoice) AVSpeechSynthesisProviderRequest
Creates a request with a voice and a description.
text: The description of the text to synthesize.
voice: The voice to use in the speech request.
func (AVSpeechSynthesisProviderRequest) SsmlRepresentation ¶
func (s AVSpeechSynthesisProviderRequest) SsmlRepresentation() string
The description of the text to synthesize.
Discussion ¶
The Speech Synthesis Markup Language describes the speech synthesis attributes for the customization of pitch, rate, intonation, and more.
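SSML is a W3C standard, so a request’s SSML representation is ordinary XML markup. A minimal illustrative string; the `<prosody>` element is standard SSML, but the rate and pitch values here are arbitrary and actual attribute support depends on the synthesizer:

```go
package main

import "fmt"

// A minimal SSML document that adjusts speaking rate and pitch
// via the standard <prosody> element.
const ssml = `<speak>
  Hello, <prosody rate="slow" pitch="+10%">nice to meet you</prosody>.
</speak>`

func main() {
	fmt.Println(ssml)
}
```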
func (AVSpeechSynthesisProviderRequest) Voice ¶
func (s AVSpeechSynthesisProviderRequest) Voice() IAVSpeechSynthesisProviderVoice
The voice to use in the speech request.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderRequest/voice
type AVSpeechSynthesisProviderRequestClass ¶
type AVSpeechSynthesisProviderRequestClass struct {
// contains filtered or unexported fields
}
func GetAVSpeechSynthesisProviderRequestClass ¶
func GetAVSpeechSynthesisProviderRequestClass() AVSpeechSynthesisProviderRequestClass
GetAVSpeechSynthesisProviderRequestClass returns the class object for AVSpeechSynthesisProviderRequest.
func (AVSpeechSynthesisProviderRequestClass) Alloc ¶
func (ac AVSpeechSynthesisProviderRequestClass) Alloc() AVSpeechSynthesisProviderRequest
Alloc allocates memory for a new instance of the class.
func (AVSpeechSynthesisProviderRequestClass) Class ¶
func (ac AVSpeechSynthesisProviderRequestClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
type AVSpeechSynthesisProviderVoice ¶
type AVSpeechSynthesisProviderVoice struct {
objectivec.Object
}
An object that represents a voice that an audio unit provides to its host.
Overview ¶
This is a voice that an AVSpeechSynthesisProviderAudioUnit provides to the system, distinct from AVSpeechSynthesisVoice. Use AVSpeechSynthesisVoice.SpeechVoices to access the underlying AVSpeechSynthesisVoice, which the system reports with the voice quality AVSpeechSynthesisVoiceQualityEnhanced.
Creating a voice ¶
- AVSpeechSynthesisProviderVoice.InitWithNameIdentifierPrimaryLanguagesSupportedLanguages: Creates a voice with a name, an identifier, and language information.
Inspecting a voice ¶
- AVSpeechSynthesisProviderVoice.Age: The age of the voice, in years.
- AVSpeechSynthesisProviderVoice.SetAge
- AVSpeechSynthesisProviderVoice.Gender: The gender of the voice.
- AVSpeechSynthesisProviderVoice.SetGender
- AVSpeechSynthesisProviderVoice.Identifier: The unique identifier for the voice.
- AVSpeechSynthesisProviderVoice.Name: The localized name of the voice.
- AVSpeechSynthesisProviderVoice.PrimaryLanguages: A list of BCP 47 codes that identify the languages the synthesizer uses.
- AVSpeechSynthesisProviderVoice.SupportedLanguages: A list of BCP 47 codes that identify the languages a voice supports.
- AVSpeechSynthesisProviderVoice.Version: The version of the voice.
- AVSpeechSynthesisProviderVoice.SetVersion
- AVSpeechSynthesisProviderVoice.VoiceSize: The size of the voice package on disk, in bytes.
- AVSpeechSynthesisProviderVoice.SetVoiceSize
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice
func AVSpeechSynthesisProviderVoiceFromID ¶
func AVSpeechSynthesisProviderVoiceFromID(id objc.ID) AVSpeechSynthesisProviderVoice
AVSpeechSynthesisProviderVoiceFromID constructs an AVSpeechSynthesisProviderVoice from an objc.ID.
An object that represents a voice that an audio unit provides to its host.
func NewAVSpeechSynthesisProviderVoice ¶
func NewAVSpeechSynthesisProviderVoice() AVSpeechSynthesisProviderVoice
NewAVSpeechSynthesisProviderVoice creates a new AVSpeechSynthesisProviderVoice instance.
func NewSpeechSynthesisProviderVoiceWithNameIdentifierPrimaryLanguagesSupportedLanguages ¶
func NewSpeechSynthesisProviderVoiceWithNameIdentifierPrimaryLanguagesSupportedLanguages(name string, identifier string, primaryLanguages []string, supportedLanguages []string) AVSpeechSynthesisProviderVoice
Creates a voice with a name, an identifier, and language information.
name: The localized name of the voice.
identifier: The unique identifier for the voice.
primaryLanguages: A list of BCP 47 codes that identify the primary languages.
supportedLanguages: A list of BCP 47 codes that identify the languages the voice supports.
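As a sketch of the constructor above, the following creates a provider voice with a primary language and an additional supported language, then notifies the system that the voice list changed via AVSpeechSynthesisProviderVoiceClass.UpdateSpeechVoices. The import path is a placeholder; substitute your module's actual path for these bindings.

```go
package main

import (
	avfaudio "example.com/avfaudio" // placeholder import path, replace with the real one
)

func registerVoice() {
	// Primary language zh-CN, with en-US also supported, so a phrase mixing
	// both languages can be spoken without switching voices.
	voice := avfaudio.NewSpeechSynthesisProviderVoiceWithNameIdentifierPrimaryLanguagesSupportedLanguages(
		"Example Voice",
		"com.example.voices.example-voice", // reverse-domain notation, unique within the extension
		[]string{"zh-CN"},
		[]string{"zh-CN", "en-US"},
	)
	voice.SetGender(avfaudio.AVSpeechSynthesisVoiceGenderUnspecified)
	voice.SetVersion("1.0.0")    // for your own tracking only
	voice.SetVoiceSize(50 << 20) // approximate on-disk size in bytes

	// Inform the system whenever you add or remove voices.
	avfaudio.GetAVSpeechSynthesisProviderVoiceClass().UpdateSpeechVoices()
}

func main() { registerVoice() }
```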
func (AVSpeechSynthesisProviderVoice) Age ¶
func (s AVSpeechSynthesisProviderVoice) Age() int
The age of the voice, in years.
Discussion ¶
The system treats this value as a trait of the voice's personality. The value defaults to `0`.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice/age
func (AVSpeechSynthesisProviderVoice) Autorelease ¶
func (s AVSpeechSynthesisProviderVoice) Autorelease() AVSpeechSynthesisProviderVoice
Autorelease adds the receiver to the current autorelease pool.
func (AVSpeechSynthesisProviderVoice) EncodeWithCoder ¶
func (s AVSpeechSynthesisProviderVoice) EncodeWithCoder(coder foundation.INSCoder)
func (AVSpeechSynthesisProviderVoice) Gender ¶
func (s AVSpeechSynthesisProviderVoice) Gender() AVSpeechSynthesisVoiceGender
The gender of the voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice/gender
func (AVSpeechSynthesisProviderVoice) Identifier ¶
func (s AVSpeechSynthesisProviderVoice) Identifier() string
The unique identifier for the voice.
Discussion ¶
Use reverse domain notation to format the identifier. The behavior is undefined unless all voices within an extension have a unique identifier.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice/identifier
func (AVSpeechSynthesisProviderVoice) Init ¶
func (s AVSpeechSynthesisProviderVoice) Init() AVSpeechSynthesisProviderVoice
Init initializes the instance.
func (AVSpeechSynthesisProviderVoice) InitWithNameIdentifierPrimaryLanguagesSupportedLanguages ¶
func (s AVSpeechSynthesisProviderVoice) InitWithNameIdentifierPrimaryLanguagesSupportedLanguages(name string, identifier string, primaryLanguages []string, supportedLanguages []string) AVSpeechSynthesisProviderVoice
Creates a voice with a name, an identifier, and language information.
name: The localized name of the voice.
identifier: The unique identifier for the voice.
primaryLanguages: A list of BCP 47 codes that identify the primary languages.
supportedLanguages: A list of BCP 47 codes that identify the languages the voice supports.
func (AVSpeechSynthesisProviderVoice) Name ¶
func (s AVSpeechSynthesisProviderVoice) Name() string
The localized name of the voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice/name
func (AVSpeechSynthesisProviderVoice) PrimaryLanguages ¶
func (s AVSpeechSynthesisProviderVoice) PrimaryLanguages() []string
A list of BCP 47 codes that identify the languages the synthesizer uses.
Discussion ¶
These languages are what a voice primarily supports. For example, if the primary language is `zh-CN`, with no additional [SupportedLanguages], the system may switch voices to speak a phrase that contains other languages. Changing voices depends on user preferences and what accessibility feature is using the voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice/primaryLanguages
func (AVSpeechSynthesisProviderVoice) SetAge ¶
func (s AVSpeechSynthesisProviderVoice) SetAge(value int)
func (AVSpeechSynthesisProviderVoice) SetGender ¶
func (s AVSpeechSynthesisProviderVoice) SetGender(value AVSpeechSynthesisVoiceGender)
func (AVSpeechSynthesisProviderVoice) SetSpeechVoices ¶
func (s AVSpeechSynthesisProviderVoice) SetSpeechVoices(value IAVSpeechSynthesisProviderVoice)
func (AVSpeechSynthesisProviderVoice) SetVersion ¶
func (s AVSpeechSynthesisProviderVoice) SetVersion(value string)
func (AVSpeechSynthesisProviderVoice) SetVoiceSize ¶
func (s AVSpeechSynthesisProviderVoice) SetVoiceSize(value int64)
func (AVSpeechSynthesisProviderVoice) SpeechVoices ¶
func (s AVSpeechSynthesisProviderVoice) SpeechVoices() IAVSpeechSynthesisProviderVoice
A list of voices the audio unit provides to the system.
See: https://developer.apple.com/documentation/avfaudio/avspeechsynthesisprovideraudiounit/speechvoices
func (AVSpeechSynthesisProviderVoice) SupportedLanguages ¶
func (s AVSpeechSynthesisProviderVoice) SupportedLanguages() []string
A list of BCP 47 codes that identify the languages a voice supports.
Discussion ¶
These languages are what a voice supports — when given a multi-language phrase — without the need to switch voice. For example, if the primary language is `zh-CN`, and this value contains `zh-CN` and `en-US`, a synthesizer that receives a phrase with both languages would speak the entire phrase.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice/supportedLanguages
func (AVSpeechSynthesisProviderVoice) Version ¶
func (s AVSpeechSynthesisProviderVoice) Version() string
The version of the voice.
Discussion ¶
This value is for your own tracking and doesn’t impact the behavior of the system.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice/version
func (AVSpeechSynthesisProviderVoice) VoiceSize ¶
func (s AVSpeechSynthesisProviderVoice) VoiceSize() int64
The size of the voice package on disk, in bytes.
Discussion ¶
This value defaults to `0`.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice/voiceSize
type AVSpeechSynthesisProviderVoiceClass ¶
type AVSpeechSynthesisProviderVoiceClass struct {
// contains filtered or unexported fields
}
func GetAVSpeechSynthesisProviderVoiceClass ¶
func GetAVSpeechSynthesisProviderVoiceClass() AVSpeechSynthesisProviderVoiceClass
GetAVSpeechSynthesisProviderVoiceClass returns the class object for AVSpeechSynthesisProviderVoice.
func (AVSpeechSynthesisProviderVoiceClass) Alloc ¶
func (ac AVSpeechSynthesisProviderVoiceClass) Alloc() AVSpeechSynthesisProviderVoice
Alloc allocates memory for a new instance of the class.
func (AVSpeechSynthesisProviderVoiceClass) Class ¶
func (ac AVSpeechSynthesisProviderVoiceClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVSpeechSynthesisProviderVoiceClass) UpdateSpeechVoices ¶
func (_AVSpeechSynthesisProviderVoiceClass AVSpeechSynthesisProviderVoiceClass) UpdateSpeechVoices()
Updates the voices your app provides to the system.
Discussion ¶
Use this method to inform the system when you add or remove voices.
type AVSpeechSynthesisVoice ¶
type AVSpeechSynthesisVoice struct {
objectivec.Object
}
A distinct voice for use in speech synthesis.
Overview ¶
The primary factors that distinguish a voice in speech synthesis are language, locale, and quality. Create an instance of AVSpeechSynthesisVoice to select a voice that’s appropriate for the text and the language, and set it as the value of the AVSpeechSynthesisVoice.Voice property on an AVSpeechUtterance instance. The voice may optionally reflect a local variant of the language, such as Australian or South African English. For a complete list of supported languages, see Languages Supported by VoiceOver.
Obtaining voices ¶
- AVSpeechSynthesisVoice.AVSpeechSynthesisVoiceIdentifierAlex: The voice that the system identifies as Alex.
Inspecting voices ¶
- AVSpeechSynthesisVoice.Identifier: The unique identifier of a voice.
- AVSpeechSynthesisVoice.Name: The name of a voice.
- AVSpeechSynthesisVoice.Quality: The speech quality of a voice.
- AVSpeechSynthesisVoice.Gender: The gender for a voice.
- AVSpeechSynthesisVoice.VoiceTraits: The traits of a voice.
- AVSpeechSynthesisVoice.AudioFileSettings: A dictionary that contains audio file settings.
Working with language codes ¶
- AVSpeechSynthesisVoice.Language: A BCP 47 code that contains the voice’s language and locale.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice
func AVSpeechSynthesisVoiceFromID ¶
func AVSpeechSynthesisVoiceFromID(id objc.ID) AVSpeechSynthesisVoice
AVSpeechSynthesisVoiceFromID constructs an AVSpeechSynthesisVoice from an objc.ID.
A distinct voice for use in speech synthesis.
func NewAVSpeechSynthesisVoice ¶
func NewAVSpeechSynthesisVoice() AVSpeechSynthesisVoice
NewAVSpeechSynthesisVoice creates a new AVSpeechSynthesisVoice instance.
func NewSpeechSynthesisVoiceWithIdentifier ¶
func NewSpeechSynthesisVoiceWithIdentifier(identifier string) AVSpeechSynthesisVoice
Retrieves a voice for the identifier you specify.
identifier: The unique identifier for a voice.
Return Value ¶
A voice for the specified identifier if the identifier is valid and the voice is available on the device; otherwise, `nil`.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/init(identifier:)
func NewSpeechSynthesisVoiceWithLanguage ¶
func NewSpeechSynthesisVoiceWithLanguage(languageCode string) AVSpeechSynthesisVoice
Retrieves a voice for the BCP 47 language code you specify.
languageCode: A BCP 47 code that identifies the language and locale for a voice.
Return Value ¶
A voice for the specified language and locale code if the code is valid; otherwise, `nil`.
Discussion ¶
Pass `nil` for `languageCode` to receive the default voice for the system’s language and region.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/init(language:)
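A minimal sketch of retrieving a voice by language, using the current locale's code from AVSpeechSynthesisVoiceClass.CurrentLanguageCode. The import path is a placeholder for this bindings module.

```go
package main

import (
	"fmt"

	avfaudio "example.com/avfaudio" // placeholder import path
)

func main() {
	// The user's current BCP 47 language and locale code, e.g. "en-US".
	code := avfaudio.GetAVSpeechSynthesisVoiceClass().CurrentLanguageCode()
	fmt.Println("current language code:", code)

	// Per the underlying Objective-C API, the result is nil when the code is
	// invalid; validate before using the voice in production code.
	voice := avfaudio.NewSpeechSynthesisVoiceWithLanguage(code)
	fmt.Println("voice:", voice.Name(), voice.Identifier())
}
```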
func (AVSpeechSynthesisVoice) AVSpeechSynthesisVoiceIdentifierAlex ¶
func (s AVSpeechSynthesisVoice) AVSpeechSynthesisVoiceIdentifierAlex() string
The voice that the system identifies as Alex.
See: https://developer.apple.com/documentation/avfaudio/avspeechsynthesisvoiceidentifieralex
func (AVSpeechSynthesisVoice) AudioFileSettings ¶
func (s AVSpeechSynthesisVoice) AudioFileSettings() foundation.INSDictionary
A dictionary that contains audio file settings.
Discussion ¶
If you want to generate speech and save it as an audio file to share or play later, use this dictionary to create an AVAudioFile instance and pass it as the `settings` parameter.
You can determine the AVAudioCommonFormat and interleaved properties of a voice from this dictionary. The format of this dictionary matches the data that AVSpeechSynthesizerBufferCallback provides for the same voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/audioFileSettings
func (AVSpeechSynthesisVoice) Autorelease ¶
func (s AVSpeechSynthesisVoice) Autorelease() AVSpeechSynthesisVoice
Autorelease adds the receiver to the current autorelease pool.
func (AVSpeechSynthesisVoice) EncodeWithCoder ¶
func (s AVSpeechSynthesisVoice) EncodeWithCoder(coder foundation.INSCoder)
func (AVSpeechSynthesisVoice) Gender ¶
func (s AVSpeechSynthesisVoice) Gender() AVSpeechSynthesisVoiceGender
The gender for a voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/gender
func (AVSpeechSynthesisVoice) Identifier ¶
func (s AVSpeechSynthesisVoice) Identifier() string
The unique identifier of a voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/identifier
func (AVSpeechSynthesisVoice) Init ¶
func (s AVSpeechSynthesisVoice) Init() AVSpeechSynthesisVoice
Init initializes the instance.
func (AVSpeechSynthesisVoice) Language ¶
func (s AVSpeechSynthesisVoice) Language() string
A BCP 47 code that contains the voice’s language and locale.
Discussion ¶
The language of a voice controls the conversion of text to spoken phonemes. For best results, ensure that the language of an utterance’s text matches the voice for the utterance. The locale of a voice reflects regional variations in pronunciation or accent. For example, a voice with a language code of `en-US` speaks English text with a North American accent, and a language code of `en-AU` speaks English text with an Australian accent.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/language
func (AVSpeechSynthesisVoice) Name ¶
func (s AVSpeechSynthesisVoice) Name() string
The name of a voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/name
func (AVSpeechSynthesisVoice) Quality ¶
func (s AVSpeechSynthesisVoice) Quality() AVSpeechSynthesisVoiceQuality
The speech quality of a voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/quality
func (AVSpeechSynthesisVoice) SetVoice ¶
func (s AVSpeechSynthesisVoice) SetVoice(value IAVSpeechSynthesisVoice)
func (AVSpeechSynthesisVoice) Voice ¶
func (s AVSpeechSynthesisVoice) Voice() IAVSpeechSynthesisVoice
The voice the speech synthesizer uses when speaking the utterance.
See: https://developer.apple.com/documentation/avfaudio/avspeechutterance/voice
func (AVSpeechSynthesisVoice) VoiceTraits ¶
func (s AVSpeechSynthesisVoice) VoiceTraits() AVSpeechSynthesisVoiceTraits
The traits of a voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/voiceTraits
type AVSpeechSynthesisVoiceClass ¶
type AVSpeechSynthesisVoiceClass struct {
// contains filtered or unexported fields
}
func GetAVSpeechSynthesisVoiceClass ¶
func GetAVSpeechSynthesisVoiceClass() AVSpeechSynthesisVoiceClass
GetAVSpeechSynthesisVoiceClass returns the class object for AVSpeechSynthesisVoice.
func (AVSpeechSynthesisVoiceClass) Alloc ¶
func (ac AVSpeechSynthesisVoiceClass) Alloc() AVSpeechSynthesisVoice
Alloc allocates memory for a new instance of the class.
func (AVSpeechSynthesisVoiceClass) Class ¶
func (ac AVSpeechSynthesisVoiceClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVSpeechSynthesisVoiceClass) CurrentLanguageCode ¶
func (_AVSpeechSynthesisVoiceClass AVSpeechSynthesisVoiceClass) CurrentLanguageCode() string
Returns the language and locale code for the user’s current locale.
Return Value ¶
A string that contains the BCP 47 language and locale code for the user’s current locale.
Discussion ¶
This code reflects the user’s language and region preferences in the Settings app.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/currentLanguageCode()
func (AVSpeechSynthesisVoiceClass) SpeechVoices ¶
func (_AVSpeechSynthesisVoiceClass AVSpeechSynthesisVoiceClass) SpeechVoices() []AVSpeechSynthesisVoice
Retrieves all available voices on the device.
Return Value ¶
An array of voices.
Discussion ¶
Use the [Language] property to identify each voice by its language and locale.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/speechVoices()
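The enumeration above can be combined with the Language property to filter voices, for example to list only English variants. A sketch, with a placeholder import path:

```go
package main

import (
	"fmt"
	"strings"

	avfaudio "example.com/avfaudio" // placeholder import path
)

func main() {
	// Walk every installed voice and keep the English ones, identified by
	// the language part of the BCP 47 code (e.g. "en-US", "en-AU").
	for _, v := range avfaudio.GetAVSpeechSynthesisVoiceClass().SpeechVoices() {
		if strings.HasPrefix(v.Language(), "en-") {
			fmt.Printf("%s (%s) quality=%v\n", v.Name(), v.Language(), v.Quality())
		}
	}
}
```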
type AVSpeechSynthesisVoiceGender ¶
type AVSpeechSynthesisVoiceGender int
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoiceGender
const (
	// AVSpeechSynthesisVoiceGenderFemale: The female voice option.
	AVSpeechSynthesisVoiceGenderFemale AVSpeechSynthesisVoiceGender = 2
	// AVSpeechSynthesisVoiceGenderMale: The male voice option.
	AVSpeechSynthesisVoiceGenderMale AVSpeechSynthesisVoiceGender = 1
	// AVSpeechSynthesisVoiceGenderUnspecified: The nonspecific gender option.
	AVSpeechSynthesisVoiceGenderUnspecified AVSpeechSynthesisVoiceGender = 0
)
func (AVSpeechSynthesisVoiceGender) String ¶
func (e AVSpeechSynthesisVoiceGender) String() string
type AVSpeechSynthesisVoiceQuality ¶
type AVSpeechSynthesisVoiceQuality int
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoiceQuality
const (
	// AVSpeechSynthesisVoiceQualityDefault: A basic quality voice that’s available on the device by default.
	AVSpeechSynthesisVoiceQualityDefault AVSpeechSynthesisVoiceQuality = 1
	// AVSpeechSynthesisVoiceQualityEnhanced: An enhanced quality voice that you must download to use.
	AVSpeechSynthesisVoiceQualityEnhanced AVSpeechSynthesisVoiceQuality = 2
	// AVSpeechSynthesisVoiceQualityPremium: A premium quality voice that you must download to use.
	AVSpeechSynthesisVoiceQualityPremium AVSpeechSynthesisVoiceQuality = 3
)
func (AVSpeechSynthesisVoiceQuality) String ¶
func (e AVSpeechSynthesisVoiceQuality) String() string
type AVSpeechSynthesisVoiceTraits ¶
type AVSpeechSynthesisVoiceTraits uint
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice/Traits
const (
	// AVSpeechSynthesisVoiceTraitIsNoveltyVoice: The trait that indicates a voice is a novelty voice.
	AVSpeechSynthesisVoiceTraitIsNoveltyVoice AVSpeechSynthesisVoiceTraits = 1
	// AVSpeechSynthesisVoiceTraitIsPersonalVoice: The trait that indicates a voice is a personal voice.
	AVSpeechSynthesisVoiceTraitIsPersonalVoice AVSpeechSynthesisVoiceTraits = 2
	// AVSpeechSynthesisVoiceTraitNone: The trait that indicates a voice is a regular voice.
	AVSpeechSynthesisVoiceTraitNone AVSpeechSynthesisVoiceTraits = 0
)
func (AVSpeechSynthesisVoiceTraits) String ¶
func (e AVSpeechSynthesisVoiceTraits) String() string
type AVSpeechSynthesizer ¶
type AVSpeechSynthesizer struct {
objectivec.Object
}
An object that produces synthesized speech from text utterances and enables monitoring or controlling of ongoing speech.
Overview ¶
To speak some text, create an AVSpeechUtterance instance that contains the text and pass it to AVSpeechSynthesizer.SpeakUtterance on a speech synthesizer instance. You can optionally also retrieve an AVSpeechSynthesisVoice and set it on the utterance’s AVSpeechSynthesizer.Voice property to have the speech synthesizer use that voice when speaking the utterance’s text.
The speech synthesizer maintains a queue of utterances to speak. If the synthesizer isn’t speaking, calling AVSpeechSynthesizer.SpeakUtterance begins speaking that utterance immediately, or after pausing for its AVSpeechSynthesizer.PreUtteranceDelay if necessary. If the synthesizer is already speaking, it adds the utterance to the queue and speaks queued utterances in the order it receives them.
After speech begins, you can use the synthesizer object to pause or stop speech. After pausing, you can resume the speech from its paused point or stop the speech entirely and remove all remaining utterances in the queue.
You can monitor the speech synthesizer by examining its AVSpeechSynthesizer.Speaking and AVSpeechSynthesizer.Paused properties, or by setting a delegate that conforms to AVSpeechSynthesizerDelegate. The delegate receives significant events as they occur during speech synthesis.
An AVSpeechSynthesizer also controls the route where the speech plays. For more information, see Directing speech output.
Controlling speech ¶
- AVSpeechSynthesizer.SpeakUtterance: Adds the utterance you specify to the speech synthesizer’s queue.
- AVSpeechSynthesizer.ContinueSpeaking: Resumes speech from its paused point.
- AVSpeechSynthesizer.PauseSpeakingAtBoundary: Pauses speech at the boundary you specify.
- AVSpeechSynthesizer.StopSpeakingAtBoundary: Stops speech at the boundary you specify.
Inspecting a speech synthesizer ¶
- AVSpeechSynthesizer.Speaking: A Boolean value that indicates whether the speech synthesizer is speaking or is in a paused state and has utterances to speak.
- AVSpeechSynthesizer.Paused: A Boolean value that indicates whether a speech synthesizer is in a paused state.
Managing the delegate ¶
- AVSpeechSynthesizer.Delegate: The delegate object for the speech synthesizer.
- AVSpeechSynthesizer.SetDelegate
Directing speech output ¶
- AVSpeechSynthesizer.WriteUtteranceToBufferCallback: Generates speech for the utterance and invokes the callback with the audio buffer.
- AVSpeechSynthesizer.WriteUtteranceToBufferCallbackToMarkerCallback: Generates audio buffers and associated metadata for storage or further speech synthesis processing.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer
func AVSpeechSynthesizerFromID ¶
func AVSpeechSynthesizerFromID(id objc.ID) AVSpeechSynthesizer
AVSpeechSynthesizerFromID constructs an AVSpeechSynthesizer from an objc.ID.
An object that produces synthesized speech from text utterances and enables monitoring or controlling of ongoing speech.
func NewAVSpeechSynthesizer ¶
func NewAVSpeechSynthesizer() AVSpeechSynthesizer
NewAVSpeechSynthesizer creates a new AVSpeechSynthesizer instance.
func (AVSpeechSynthesizer) Autorelease ¶
func (s AVSpeechSynthesizer) Autorelease() AVSpeechSynthesizer
Autorelease adds the receiver to the current autorelease pool.
func (AVSpeechSynthesizer) ContinueSpeaking ¶
func (s AVSpeechSynthesizer) ContinueSpeaking() bool
Resumes speech from its paused point.
Return Value ¶
`true` if speech resumes; otherwise, `false`.
Discussion ¶
This method only has an effect if the speech synthesizer is in a paused state.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/continueSpeaking()
func (AVSpeechSynthesizer) Delegate ¶
func (s AVSpeechSynthesizer) Delegate() AVSpeechSynthesizerDelegate
The delegate object for the speech synthesizer.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/delegate
func (AVSpeechSynthesizer) Init ¶
func (s AVSpeechSynthesizer) Init() AVSpeechSynthesizer
Init initializes the instance.
func (AVSpeechSynthesizer) PauseSpeakingAtBoundary ¶
func (s AVSpeechSynthesizer) PauseSpeakingAtBoundary(boundary AVSpeechBoundary) bool
Pauses speech at the boundary you specify.
boundary: An enumeration that describes whether to pause speech immediately or only after the synthesizer finishes speaking the current word.
Return Value ¶
`true` if speech pauses; otherwise, `false`.
Discussion ¶
The `boundary` parameter also affects how the speech synthesizer resumes speaking text after a pause and a call to [ContinueSpeaking]. If the boundary is [SpeechBoundaryImmediate], speech resumes from the exact point where it paused, even if that point occurs in the middle of a word. If the boundary is [SpeechBoundaryWord], speech resumes from the word that follows the last fully spoken word.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/pauseSpeaking(at:)
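A sketch of the pause-and-resume flow described above. The constant name `AVSpeechBoundaryWord` is assumed from the [SpeechBoundaryWord] reference and may differ in this package; the import path is a placeholder.

```go
package main

import (
	avfaudio "example.com/avfaudio" // placeholder import path
)

// pauseThenResume pauses speech at a word boundary and resumes it.
// AVSpeechBoundaryWord is an assumed constant name; verify it against the
// package's AVSpeechBoundary declarations.
func pauseThenResume(synth avfaudio.AVSpeechSynthesizer) {
	if synth.Speaking() && !synth.Paused() {
		if synth.PauseSpeakingAtBoundary(avfaudio.AVSpeechBoundaryWord) {
			// With a word boundary, speech resumes from the word that
			// follows the last fully spoken word.
			synth.ContinueSpeaking()
		}
	}
}

func main() {}
```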
func (AVSpeechSynthesizer) Paused ¶
func (s AVSpeechSynthesizer) Paused() bool
A Boolean value that indicates whether a speech synthesizer is in a paused state.
Discussion ¶
If `true`, the speech synthesizer is in a paused state after beginning to speak an utterance; otherwise, `false`.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/isPaused
func (AVSpeechSynthesizer) PreUtteranceDelay ¶
func (s AVSpeechSynthesizer) PreUtteranceDelay() float64
The amount of time the speech synthesizer pauses before speaking the utterance.
See: https://developer.apple.com/documentation/avfaudio/avspeechutterance/preutterancedelay
func (AVSpeechSynthesizer) SetDelegate ¶
func (s AVSpeechSynthesizer) SetDelegate(value AVSpeechSynthesizerDelegate)
func (AVSpeechSynthesizer) SetPreUtteranceDelay ¶
func (s AVSpeechSynthesizer) SetPreUtteranceDelay(value float64)
func (AVSpeechSynthesizer) SetVoice ¶
func (s AVSpeechSynthesizer) SetVoice(value IAVSpeechSynthesisVoice)
func (AVSpeechSynthesizer) SpeakUtterance ¶
func (s AVSpeechSynthesizer) SpeakUtterance(utterance IAVSpeechUtterance)
Adds the utterance you specify to the speech synthesizer’s queue.
utterance: An AVSpeechUtterance instance that contains text to speak.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/speak(_:)
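A minimal end-to-end sketch of queuing an utterance. The utterance constructor `NewSpeechUtteranceWithString` isn't shown in this section and is assumed from the package's constructor naming convention; the import path is a placeholder.

```go
package main

import (
	avfaudio "example.com/avfaudio" // placeholder import path
)

func main() {
	synth := avfaudio.NewAVSpeechSynthesizer()

	// Use a US English voice for the utterance's text.
	synth.SetVoice(avfaudio.NewSpeechSynthesisVoiceWithLanguage("en-US"))

	// NewSpeechUtteranceWithString is an assumed constructor name.
	utterance := avfaudio.NewSpeechUtteranceWithString("Hello, world.")

	// Queued utterances are spoken in the order the synthesizer receives them.
	synth.SpeakUtterance(utterance)
}
```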
func (AVSpeechSynthesizer) Speaking ¶
func (s AVSpeechSynthesizer) Speaking() bool
A Boolean value that indicates whether the speech synthesizer is speaking or is in a paused state and has utterances to speak.
Discussion ¶
If `true`, the synthesizer is speaking or is in a paused state with utterances in its queue. If `false`, the synthesizer isn’t speaking and it doesn’t have any utterances in its queue.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/isSpeaking
func (AVSpeechSynthesizer) StopSpeakingAtBoundary ¶
func (s AVSpeechSynthesizer) StopSpeakingAtBoundary(boundary AVSpeechBoundary) bool
Stops speech at the boundary you specify.
boundary: An enumeration that describes whether to stop speech immediately or only after the synthesizer finishes speaking the current word.
Return Value ¶
`true` if speech stops; otherwise, `false`.
Discussion ¶
Unlike pausing a speech synthesizer, which can resume after a pause, stopping the synthesizer immediately cancels speech and removes all unspoken utterances from the synthesizer’s queue.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/stopSpeaking(at:)
func (AVSpeechSynthesizer) Voice ¶
func (s AVSpeechSynthesizer) Voice() IAVSpeechSynthesisVoice
The voice the speech synthesizer uses when speaking the utterance.
See: https://developer.apple.com/documentation/avfaudio/avspeechutterance/voice
func (AVSpeechSynthesizer) WriteUtteranceToBufferCallback ¶
func (s AVSpeechSynthesizer) WriteUtteranceToBufferCallback(utterance IAVSpeechUtterance, bufferCallback AVSpeechSynthesizerBufferCallback)
Generates speech for the utterance and invokes the callback with the audio buffer.
utterance: The utterance for synthesizing speech.
bufferCallback: The system calls this closure with the generated audio buffer.
Discussion ¶
Call this method to receive audio buffers to store or further process synthesized speech.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/write(_:toBufferCallback:)
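The following sketches capturing synthesized audio instead of playing it. The utterance constructor `NewSpeechUtteranceWithString` is assumed, and the import path is a placeholder.

```go
package main

import (
	"fmt"

	avfaudio "example.com/avfaudio" // placeholder import path
)

func main() {
	synth := avfaudio.NewAVSpeechSynthesizer()
	utterance := avfaudio.NewSpeechUtteranceWithString("Saved, not spoken.") // assumed constructor

	count := 0
	// Instead of routing audio to an output, the synthesizer hands each
	// rendered buffer to the callback for storage or further processing,
	// e.g. appending to an AVAudioFile created with the voice's
	// AudioFileSettings dictionary.
	synth.WriteUtteranceToBufferCallback(utterance, func(buf avfaudio.AVAudioBuffer) {
		count++
	})
	fmt.Println("buffers received:", count)
}
```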
func (AVSpeechSynthesizer) WriteUtteranceToBufferCallbackToMarkerCallback ¶
func (s AVSpeechSynthesizer) WriteUtteranceToBufferCallbackToMarkerCallback(utterance IAVSpeechUtterance, bufferCallback AVSpeechSynthesizerBufferCallback, markerCallback AVSpeechSynthesizerMarkerCallback)
Generates audio buffers and associated metadata for storage or further speech synthesis processing.
utterance: An utterance for a synthesizer to speak.
bufferCallback: A callback that the system invokes with the synthesized audio data.
markerCallback: A callback that the system invokes with marker information.
type AVSpeechSynthesizerBufferCallback ¶
type AVSpeechSynthesizerBufferCallback = func(AVAudioBuffer)
AVSpeechSynthesizerBufferCallback is a type that defines a callback that receives a buffer of generated speech.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/BufferCallback
type AVSpeechSynthesizerClass ¶
type AVSpeechSynthesizerClass struct {
// contains filtered or unexported fields
}
func GetAVSpeechSynthesizerClass ¶
func GetAVSpeechSynthesizerClass() AVSpeechSynthesizerClass
GetAVSpeechSynthesizerClass returns the class object for AVSpeechSynthesizer.
func (AVSpeechSynthesizerClass) Alloc ¶
func (ac AVSpeechSynthesizerClass) Alloc() AVSpeechSynthesizer
Alloc allocates memory for a new instance of the class.
func (AVSpeechSynthesizerClass) AvailableVoicesDidChangeNotification ¶
func (_AVSpeechSynthesizerClass AVSpeechSynthesizerClass) AvailableVoicesDidChangeNotification() foundation.NSString
A notification that indicates a change in available voices for speech synthesis.
func (AVSpeechSynthesizerClass) Class ¶
func (ac AVSpeechSynthesizerClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVSpeechSynthesizerClass) PersonalVoiceAuthorizationStatus ¶
func (_AVSpeechSynthesizerClass AVSpeechSynthesizerClass) PersonalVoiceAuthorizationStatus() AVSpeechSynthesisPersonalVoiceAuthorizationStatus
Your app’s authorization to use personal voices.
Discussion ¶
The user can grant or deny your app’s request to use personal voices when they’re initially prompted, and change the authorization in the Settings app. Additionally, the framework denies the request if the device doesn’t support using personal voices.
func (AVSpeechSynthesizerClass) RequestPersonalVoiceAuthorization ¶
func (sc AVSpeechSynthesizerClass) RequestPersonalVoiceAuthorization(ctx context.Context) (AVSpeechSynthesisPersonalVoiceAuthorizationStatus, error)
RequestPersonalVoiceAuthorization is a synchronous wrapper around [AVSpeechSynthesizer.RequestPersonalVoiceAuthorizationWithCompletionHandler]. It blocks until the completion handler fires or the context is cancelled.
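A sketch of the synchronous wrapper with a bounded wait, so a user who never responds doesn't block the caller forever. The import path is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"time"

	avfaudio "example.com/avfaudio" // placeholder import path
)

func main() {
	// Give the user up to 30 seconds to respond to the prompt.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Blocks until the completion handler fires or the context is cancelled.
	status, err := avfaudio.GetAVSpeechSynthesizerClass().RequestPersonalVoiceAuthorization(ctx)
	if err != nil {
		fmt.Println("authorization request failed:", err)
		return
	}
	fmt.Println("personal voice authorization status:", status)
}
```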
func (AVSpeechSynthesizerClass) RequestPersonalVoiceAuthorizationWithCompletionHandler ¶
func (_AVSpeechSynthesizerClass AVSpeechSynthesizerClass) RequestPersonalVoiceAuthorizationWithCompletionHandler(handler AVSpeechSynthesisPersonalVoiceAuthorizationStatusHandler)
Prompts the user to authorize your app to use personal voices.
handler: A completion handler that receives the authorization status; the system calls it after the user responds to the request to authorize use of personal voices.
type AVSpeechSynthesizerDelegate ¶
type AVSpeechSynthesizerDelegate interface {
objectivec.IObject
}
A delegate protocol that contains optional methods you can implement to respond to events that occur during speech synthesis.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizerDelegate
type AVSpeechSynthesizerDelegateConfig ¶
type AVSpeechSynthesizerDelegateConfig struct {
// Responding to speech synthesis events
// SpeechSynthesizerWillSpeakRangeOfSpeechStringUtterance — Tells the delegate when the synthesizer is about to speak a portion of an utterance’s text.
SpeechSynthesizerWillSpeakRangeOfSpeechStringUtterance func(synthesizer AVSpeechSynthesizer, characterRange foundation.NSRange, utterance AVSpeechUtterance)
// Other Methods
// SpeechSynthesizerDidStartSpeechUtterance — Tells the delegate when the synthesizer begins speaking an utterance.
SpeechSynthesizerDidStartSpeechUtterance func(synthesizer AVSpeechSynthesizer, utterance AVSpeechUtterance)
// SpeechSynthesizerWillSpeakMarkerUtterance — Tells the delegate when the synthesizer is about to speak a marker of an utterance’s text.
SpeechSynthesizerWillSpeakMarkerUtterance func(synthesizer AVSpeechSynthesizer, marker AVSpeechSynthesisMarker, utterance AVSpeechUtterance)
// SpeechSynthesizerDidPauseSpeechUtterance — Tells the delegate when the synthesizer pauses while speaking an utterance.
SpeechSynthesizerDidPauseSpeechUtterance func(synthesizer AVSpeechSynthesizer, utterance AVSpeechUtterance)
// SpeechSynthesizerDidContinueSpeechUtterance — Tells the delegate when the synthesizer resumes speaking an utterance after pausing.
SpeechSynthesizerDidContinueSpeechUtterance func(synthesizer AVSpeechSynthesizer, utterance AVSpeechUtterance)
// SpeechSynthesizerDidFinishSpeechUtterance — Tells the delegate when the synthesizer finishes speaking an utterance.
SpeechSynthesizerDidFinishSpeechUtterance func(synthesizer AVSpeechSynthesizer, utterance AVSpeechUtterance)
// SpeechSynthesizerDidCancelSpeechUtterance — Tells the delegate when the synthesizer cancels speaking an utterance.
SpeechSynthesizerDidCancelSpeechUtterance func(synthesizer AVSpeechSynthesizer, utterance AVSpeechUtterance)
}
AVSpeechSynthesizerDelegateConfig holds optional typed callbacks for AVSpeechSynthesizerDelegate methods. Set non-nil fields to register the corresponding Objective-C delegate method. Methods with nil callbacks are not registered, so [NSObject.RespondsToSelector] returns false for them — matching the Objective-C delegate pattern exactly.
See Apple Documentation for protocol details.
type AVSpeechSynthesizerDelegateObject ¶
type AVSpeechSynthesizerDelegateObject struct {
objectivec.Object
}
AVSpeechSynthesizerDelegateObject wraps an existing Objective-C object that conforms to the AVSpeechSynthesizerDelegate protocol.
func AVSpeechSynthesizerDelegateObjectFromID ¶
func AVSpeechSynthesizerDelegateObjectFromID(id objc.ID) AVSpeechSynthesizerDelegateObject
AVSpeechSynthesizerDelegateObjectFromID constructs an AVSpeechSynthesizerDelegateObject from an objc.ID. The object is assumed to conform to the protocol; conformance is checked only at runtime.
func NewAVSpeechSynthesizerDelegate ¶
func NewAVSpeechSynthesizerDelegate(config AVSpeechSynthesizerDelegateConfig) AVSpeechSynthesizerDelegateObject
NewAVSpeechSynthesizerDelegate creates an Objective-C object implementing the AVSpeechSynthesizerDelegate protocol.
Each call registers a unique Objective-C class containing only the methods set in config. This means [NSObject.RespondsToSelector] works correctly for optional delegate methods — only non-nil callbacks are registered.
The returned AVSpeechSynthesizerDelegateObject satisfies the AVSpeechSynthesizerDelegate interface and can be passed directly to SetDelegate and similar methods.
See Apple Documentation for protocol details.
func (AVSpeechSynthesizerDelegateObject) BaseObject ¶
func (o AVSpeechSynthesizerDelegateObject) BaseObject() objectivec.Object
func (AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidCancelSpeechUtterance ¶
func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidCancelSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
Tells the delegate when the synthesizer cancels speaking an utterance.
synthesizer: The speech synthesizer that cancels speaking the utterance.
utterance: The utterance that the speech synthesizer cancels speaking.
Discussion ¶
The system only calls this method if a speech synthesizer is speaking an utterance and you call its [StopSpeakingAtBoundary] method. The system doesn’t call this method if the synthesizer is in a delay between utterances when speech stops, and it doesn’t call it for unspoken utterances.
func (AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidContinueSpeechUtterance ¶
func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidContinueSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
Tells the delegate when the synthesizer resumes speaking an utterance after pausing.
synthesizer: The speech synthesizer that resumes speaking the utterance.
utterance: The utterance that the speech synthesizer resumes speaking.
Discussion ¶
The system only calls this method if a speech synthesizer pauses speaking and you call its [ContinueSpeaking] method. The system doesn’t call this method if the synthesizer pauses while in a delay between utterances.
func (AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidFinishSpeechUtterance ¶
func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidFinishSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
Tells the delegate when the synthesizer finishes speaking an utterance.
synthesizer: The speech synthesizer that finishes speaking the utterance.
utterance: The utterance that the speech synthesizer finishes speaking.
Discussion ¶
The system ignores the final utterance’s [PostUtteranceDelay] and calls this method immediately when speech ends.
func (AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidPauseSpeechUtterance ¶
func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidPauseSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
Tells the delegate when the synthesizer pauses while speaking an utterance.
synthesizer: The speech synthesizer that pauses speaking the utterance.
utterance: The utterance that the speech synthesizer pauses speaking.
Discussion ¶
The system only calls this method if a speech synthesizer is speaking an utterance and you call its [PauseSpeakingAtBoundary] method. The system doesn’t call this method if the synthesizer is in a delay between utterances when speech pauses.
func (AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidStartSpeechUtterance ¶
func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerDidStartSpeechUtterance(synthesizer IAVSpeechSynthesizer, utterance IAVSpeechUtterance)
Tells the delegate when the synthesizer begins speaking an utterance.
synthesizer: The speech synthesizer that starts speaking the utterance.
utterance: The utterance that the speech synthesizer starts speaking.
Discussion ¶
If the utterance’s [PreUtteranceDelay] property is greater than zero, the system calls this method after the delay completes and speech begins.
func (AVSpeechSynthesizerDelegateObject) SpeechSynthesizerWillSpeakMarkerUtterance ¶
func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerWillSpeakMarkerUtterance(synthesizer IAVSpeechSynthesizer, marker IAVSpeechSynthesisMarker, utterance IAVSpeechUtterance)
Tells the delegate when the synthesizer is about to speak a marker of an utterance’s text.
synthesizer: The speech synthesizer that’s about to speak a marker of an utterance.
marker: The marker in the utterance’s text that the speech synthesizer is about to speak.
utterance: The utterance that the speech synthesizer is about to speak.
func (AVSpeechSynthesizerDelegateObject) SpeechSynthesizerWillSpeakRangeOfSpeechStringUtterance ¶
func (o AVSpeechSynthesizerDelegateObject) SpeechSynthesizerWillSpeakRangeOfSpeechStringUtterance(synthesizer IAVSpeechSynthesizer, characterRange foundation.NSRange, utterance IAVSpeechUtterance)
Tells the delegate when the synthesizer is about to speak a portion of an utterance’s text.
synthesizer: The speech synthesizer that’s about to speak an utterance.
characterRange: The range of characters in the utterance’s [SpeechString] that correspond to the unit of speech the synthesizer is about to speak.
utterance: The utterance that the speech synthesizer is about to speak.
Discussion ¶
The system calls this method once for each unit of speech in the utterance’s text, which is generally a word.
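The characterRange parameter is a Foundation NSRange into [SpeechString], and NSRange counts UTF-16 code units, not Go bytes or runes. A callback that highlights the word being spoken therefore has to index the UTF-16 encoding of the string before slicing. A minimal framework-free helper (the function name is illustrative):

```go
package main

import (
	"fmt"
	"unicode/utf16"
)

// substringForNSRange extracts the substring of s addressed by an
// NSRange-style (location, length) pair. NSRange counts UTF-16 code
// units, so index into the UTF-16 encoding rather than the UTF-8 bytes.
func substringForNSRange(s string, location, length int) string {
	u := utf16.Encode([]rune(s))
	if location < 0 || length < 0 || location+length > len(u) {
		return ""
	}
	return string(utf16.Decode(u[location : location+length]))
}

func main() {
	// In a WillSpeakRangeOfSpeechString callback you might highlight
	// the unit of speech the synthesizer is about to speak:
	fmt.Println(substringForNSRange("Héllo wörld", 6, 5)) // wörld
}
```

Slicing the Go string directly by characterRange offsets works only for pure-ASCII text; any non-ASCII character shifts the byte offsets.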
type AVSpeechSynthesizerMarkerCallback ¶
type AVSpeechSynthesizerMarkerCallback = func([]AVSpeechSynthesisMarker)
AVSpeechSynthesizerMarkerCallback is a type that defines a callback that receives speech markers.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer/MarkerCallback
type AVSpeechUtterance ¶
type AVSpeechUtterance struct {
objectivec.Object
}
An object that encapsulates the text for speech synthesis and parameters that affect the speech.
Overview ¶
An AVSpeechUtterance is the basic unit of speech synthesis.
To synthesize speech, create an AVSpeechUtterance instance with text you want a speech synthesizer to speak. Optionally, change the AVSpeechUtterance.Voice, AVSpeechUtterance.PitchMultiplier, AVSpeechUtterance.Volume, AVSpeechUtterance.Rate, AVSpeechUtterance.PreUtteranceDelay, or AVSpeechUtterance.PostUtteranceDelay parameters for the utterance. Pass the utterance to an instance of AVSpeechSynthesizer to begin speech, or enqueue the utterance to speak later if the synthesizer is already speaking.
Split a body of text into multiple utterances if you want to apply different speech parameters. For example, you can emphasize a sentence by increasing the pitch and decreasing the rate of that utterance relative to others, or you can introduce pauses between sentences by putting each into an utterance with a leading or trailing delay.
Set and use the AVSpeechSynthesizerDelegate to receive notifications when the synthesizer starts or finishes speaking an utterance. Create an utterance for each meaningful unit in a body of text if you want to receive notifications as its speech progresses.
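The split-and-configure pattern above can be sketched without the bindings. This framework-free example models each utterance as a plain struct of parameters (the struct and function names are illustrative; in real code you would call SetRate, SetPitchMultiplier, and SetPostUtteranceDelay on AVSpeechUtterance values instead):

```go
package main

import (
	"fmt"
	"strings"
)

// utteranceParams mirrors the AVSpeechUtterance knobs described in
// the overview; plain values stand in for the real binding calls.
type utteranceParams struct {
	Text            string
	Rate            float32 // stands in for AVSpeechUtterance.SetRate
	PitchMultiplier float32 // stands in for SetPitchMultiplier
	PostDelay       float64 // stands in for SetPostUtteranceDelay, in seconds
}

// splitIntoUtterances breaks a body of text into one utterance per
// sentence so each can carry its own parameters — here, a trailing
// delay that inserts a pause between sentences.
func splitIntoUtterances(text string, pauseSeconds float64) []utteranceParams {
	var out []utteranceParams
	for _, s := range strings.SplitAfter(text, ". ") {
		s = strings.TrimSpace(s)
		if s == "" {
			continue
		}
		out = append(out, utteranceParams{
			Text:            s,
			Rate:            0.5, // AVSpeechUtteranceDefaultSpeechRate is commonly 0.5
			PitchMultiplier: 1.0, // the documented default pitch
			PostDelay:       pauseSeconds,
		})
	}
	return out
}

func main() {
	for _, u := range splitIntoUtterances("First point. Second point.", 0.4) {
		fmt.Printf("%q pause=%.1fs\n", u.Text, u.PostDelay)
	}
}
```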
Creating an utterance ¶
- AVSpeechUtterance.InitWithString: Creates an utterance with the text string that you specify for the speech synthesizer to speak.
- AVSpeechUtterance.InitWithAttributedString: Creates an utterance with the attributed text string that you specify for the speech synthesizer to speak.
- AVSpeechUtterance.AVSpeechSynthesisIPANotationAttribute: A string that contains International Phonetic Alphabet (IPA) symbols the speech synthesizer uses to control pronunciation of certain words or phrases.
- AVSpeechUtterance.InitWithSSMLRepresentation: Creates a speech utterance with a Speech Synthesis Markup Language (SSML) string.
Configuring an utterance ¶
- AVSpeechUtterance.Voice: The voice the speech synthesizer uses when speaking the utterance.
- AVSpeechUtterance.SetVoice
- AVSpeechUtterance.PitchMultiplier: The baseline pitch the speech synthesizer uses when speaking the utterance.
- AVSpeechUtterance.SetPitchMultiplier
- AVSpeechUtterance.Volume: The volume the speech synthesizer uses when speaking the utterance.
- AVSpeechUtterance.SetVolume
- AVSpeechUtterance.PrefersAssistiveTechnologySettings: A Boolean that specifies whether assistive technology settings take precedence over the property values of this utterance.
- AVSpeechUtterance.SetPrefersAssistiveTechnologySettings
Configuring utterance timing ¶
- AVSpeechUtterance.Rate: The rate the speech synthesizer uses when speaking the utterance.
- AVSpeechUtterance.SetRate
- AVSpeechUtterance.AVSpeechUtteranceMinimumSpeechRate: The minimum rate the speech synthesizer uses when speaking an utterance.
- AVSpeechUtterance.AVSpeechUtteranceMaximumSpeechRate: The maximum rate the speech synthesizer uses when speaking an utterance.
- AVSpeechUtterance.AVSpeechUtteranceDefaultSpeechRate: The default rate the speech synthesizer uses when speaking an utterance.
- AVSpeechUtterance.PreUtteranceDelay: The amount of time the speech synthesizer pauses before speaking the utterance.
- AVSpeechUtterance.SetPreUtteranceDelay
- AVSpeechUtterance.PostUtteranceDelay: The amount of time the speech synthesizer pauses after speaking an utterance before handling the next utterance in the queue.
- AVSpeechUtterance.SetPostUtteranceDelay
Inspecting utterance text ¶
- AVSpeechUtterance.SpeechString: A string that contains the text for speech synthesis.
- AVSpeechUtterance.AttributedSpeechString: An attributed string that contains the text for speech synthesis.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance
func AVSpeechUtteranceFromID ¶
func AVSpeechUtteranceFromID(id objc.ID) AVSpeechUtterance
AVSpeechUtteranceFromID constructs an AVSpeechUtterance from an objc.ID.
An object that encapsulates the text for speech synthesis and parameters that affect the speech.
func NewAVSpeechUtterance ¶
func NewAVSpeechUtterance() AVSpeechUtterance
NewAVSpeechUtterance creates a new AVSpeechUtterance instance.
func NewSpeechUtteranceWithAttributedString ¶
func NewSpeechUtteranceWithAttributedString(string_ foundation.NSAttributedString) AVSpeechUtterance
Creates an utterance with the attributed text string that you specify for the speech synthesizer to speak.
string: A string that contains the text to speak.
Discussion ¶
To speak the text, pass the utterance to an instance of AVSpeechSynthesizer.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/init(attributedString:)
func NewSpeechUtteranceWithSSMLRepresentation ¶
func NewSpeechUtteranceWithSSMLRepresentation(string_ string) AVSpeechUtterance
Creates a speech utterance with a Speech Synthesis Markup Language (SSML) string.
string: A string to speak that contains valid SSML markup. The initializer returns `nil` if you pass an invalid SSML string.
Discussion ¶
If you use SSML to request voices with certain attributes, the system may split a single utterance into multiple parts and send each to an appropriate synthesizer.
If no voice matches the properties, the utterance uses the voice set in its [Voice] property. If you don’t specify a voice, the system uses its default voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/init(ssmlRepresentation:)
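Because the initializer returns nil for invalid SSML, it can be useful to catch obviously malformed markup before handing it to the synthesizer. The sketch below is only a cheap XML well-formedness check with a `speak` document element — it does not validate the SSML vocabulary, and the initializer remains the authority on what the system accepts:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"io"
	"strings"
)

// looksLikeWellFormedSSML reports whether the markup parses as
// well-formed XML whose document element is <speak>. This is a
// pre-check only; it does not validate SSML semantics.
func looksLikeWellFormedSSML(markup string) bool {
	dec := xml.NewDecoder(strings.NewReader(markup))
	sawSpeakRoot := false
	for {
		tok, err := dec.Token()
		if err == io.EOF {
			return sawSpeakRoot // clean end of document
		}
		if err != nil {
			return false // syntax error, e.g. an unterminated element
		}
		if start, ok := tok.(xml.StartElement); ok && !sawSpeakRoot {
			if start.Name.Local != "speak" {
				return false
			}
			sawSpeakRoot = true
		}
	}
}

func main() {
	fmt.Println(looksLikeWellFormedSSML(`<speak>Hello <break time="200ms"/> world</speak>`)) // true
	fmt.Println(looksLikeWellFormedSSML(`<speak>unterminated`))                              // false
}
```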
func NewSpeechUtteranceWithString ¶
func NewSpeechUtteranceWithString(string_ string) AVSpeechUtterance
Creates an utterance with the text string that you specify for the speech synthesizer to speak.
string: A string that contains the text to speak.
Return Value ¶
An AVSpeechUtterance object that can speak the specified text.
Discussion ¶
To speak the text, pass the utterance to an instance of AVSpeechSynthesizer.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/init(string:)
func (AVSpeechUtterance) AVSpeechSynthesisIPANotationAttribute ¶
func (s AVSpeechUtterance) AVSpeechSynthesisIPANotationAttribute() string
A string that contains International Phonetic Alphabet (IPA) symbols the speech synthesizer uses to control pronunciation of certain words or phrases.
See: https://developer.apple.com/documentation/avfaudio/avspeechsynthesisipanotationattribute
func (AVSpeechUtterance) AVSpeechUtteranceDefaultSpeechRate ¶
func (s AVSpeechUtterance) AVSpeechUtteranceDefaultSpeechRate() float32
The default rate the speech synthesizer uses when speaking an utterance.
See: https://developer.apple.com/documentation/avfaudio/avspeechutterancedefaultspeechrate
func (AVSpeechUtterance) AVSpeechUtteranceMaximumSpeechRate ¶
func (s AVSpeechUtterance) AVSpeechUtteranceMaximumSpeechRate() float32
The maximum rate the speech synthesizer uses when speaking an utterance.
See: https://developer.apple.com/documentation/avfaudio/avspeechutterancemaximumspeechrate
func (AVSpeechUtterance) AVSpeechUtteranceMinimumSpeechRate ¶
func (s AVSpeechUtterance) AVSpeechUtteranceMinimumSpeechRate() float32
The minimum rate the speech synthesizer uses when speaking an utterance.
See: https://developer.apple.com/documentation/avfaudio/avspeechutteranceminimumspeechrate
func (AVSpeechUtterance) AttributedSpeechString ¶
func (s AVSpeechUtterance) AttributedSpeechString() foundation.NSAttributedString
An attributed string that contains the text for speech synthesis.
Discussion ¶
You can’t change an utterance’s text after initialization. If you want the speech synthesizer to speak different text, create a new utterance.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/attributedSpeechString
func (AVSpeechUtterance) Autorelease ¶
func (s AVSpeechUtterance) Autorelease() AVSpeechUtterance
Autorelease adds the receiver to the current autorelease pool.
func (AVSpeechUtterance) EncodeWithCoder ¶
func (s AVSpeechUtterance) EncodeWithCoder(coder foundation.INSCoder)
func (AVSpeechUtterance) Init ¶
func (s AVSpeechUtterance) Init() AVSpeechUtterance
Init initializes the instance.
func (AVSpeechUtterance) InitWithAttributedString ¶
func (s AVSpeechUtterance) InitWithAttributedString(string_ foundation.NSAttributedString) AVSpeechUtterance
Creates an utterance with the attributed text string that you specify for the speech synthesizer to speak.
string: A string that contains the text to speak.
Discussion ¶
To speak the text, pass the utterance to an instance of AVSpeechSynthesizer.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/init(attributedString:)
func (AVSpeechUtterance) InitWithSSMLRepresentation ¶
func (s AVSpeechUtterance) InitWithSSMLRepresentation(string_ string) AVSpeechUtterance
Creates a speech utterance with a Speech Synthesis Markup Language (SSML) string.
string: A string to speak that contains valid SSML markup. The initializer returns `nil` if you pass an invalid SSML string.
Discussion ¶
If you use SSML to request voices with certain attributes, the system may split a single utterance into multiple parts and send each to an appropriate synthesizer.
If no voice matches the properties, the utterance uses the voice set in its [Voice] property. If you don’t specify a voice, the system uses its default voice.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/init(ssmlRepresentation:)
func (AVSpeechUtterance) InitWithString ¶
func (s AVSpeechUtterance) InitWithString(string_ string) AVSpeechUtterance
Creates an utterance with the text string that you specify for the speech synthesizer to speak.
string: A string that contains the text to speak.
Return Value ¶
An AVSpeechUtterance object that can speak the specified text.
Discussion ¶
To speak the text, pass the utterance to an instance of AVSpeechSynthesizer.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/init(string:)
func (AVSpeechUtterance) PitchMultiplier ¶
func (s AVSpeechUtterance) PitchMultiplier() float32
The baseline pitch the speech synthesizer uses when speaking the utterance.
Discussion ¶
Before enqueuing the utterance, set this property to a value within the range of `0.5` for lower pitch to `2.0` for higher pitch. The default value is `1.0`. Setting this after enqueuing the utterance has no effect.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/pitchMultiplier
func (AVSpeechUtterance) PostUtteranceDelay ¶
func (s AVSpeechUtterance) PostUtteranceDelay() float64
The amount of time the speech synthesizer pauses after speaking an utterance before handling the next utterance in the queue.
Discussion ¶
When multiple utterances exist in the queue, the speech synthesizer pauses a minimum amount of time equal to the sum of the current utterance’s `postUtteranceDelay` and the next utterance’s [PreUtteranceDelay].
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/postUtteranceDelay
func (AVSpeechUtterance) PreUtteranceDelay ¶
func (s AVSpeechUtterance) PreUtteranceDelay() float64
The amount of time the speech synthesizer pauses before speaking the utterance.
Discussion ¶
When multiple utterances exist in the queue, the speech synthesizer pauses a minimum amount of time equal to the sum of the current utterance’s [PostUtteranceDelay] and the next utterance’s `preUtteranceDelay`.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/preUtteranceDelay
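The two delay discussions describe one rule: the minimum gap between consecutive queued utterances is the current utterance’s post-utterance delay plus the next utterance’s pre-utterance delay. A tiny illustrative helper (not part of the binding) makes the arithmetic explicit:

```go
package main

import "fmt"

// minimumPauseBetween returns the shortest gap, in seconds, the
// synthesizer leaves between two queued utterances, per the documented
// rule: current PostUtteranceDelay + next PreUtteranceDelay.
func minimumPauseBetween(currentPostDelay, nextPreDelay float64) float64 {
	return currentPostDelay + nextPreDelay
}

func main() {
	// A 0.5s trailing delay on the current utterance and a 0.25s
	// leading delay on the next: the synthesizer pauses at least 0.75s.
	fmt.Println(minimumPauseBetween(0.5, 0.25)) // 0.75
}
```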
func (AVSpeechUtterance) PrefersAssistiveTechnologySettings ¶
func (s AVSpeechUtterance) PrefersAssistiveTechnologySettings() bool
A Boolean that specifies whether assistive technology settings take precedence over the property values of this utterance.
Discussion ¶
If this property is `true`, but no assistive technology, such as VoiceOver, is on, the speech synthesizer uses the utterance property values.
func (AVSpeechUtterance) Rate ¶
func (s AVSpeechUtterance) Rate() float32
The rate the speech synthesizer uses when speaking the utterance.
Discussion ¶
The speech rate is a decimal representation within the range of AVSpeechUtteranceMinimumSpeechRate and AVSpeechUtteranceMaximumSpeechRate. Lower values correspond to slower speech, and higher values correspond to faster speech. The default value is AVSpeechUtteranceDefaultSpeechRate. Set this property before enqueuing the utterance because setting it afterward has no effect.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/rate
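Since Rate must fall within the minimum and maximum bounds, a caller computing a rate (say, from a user preference) may want to clamp it before calling SetRate. The bounds hard-coded in `main` below (`0.0` and `1.0`) are assumptions for illustration; real code should read them from the AVSpeechUtteranceMinimumSpeechRate and AVSpeechUtteranceMaximumSpeechRate accessors:

```go
package main

import "fmt"

// clampRate confines a requested speech rate to the [min, max] window
// the synthesizer accepts.
func clampRate(requested, min, max float32) float32 {
	if requested < min {
		return min
	}
	if requested > max {
		return max
	}
	return requested
}

func main() {
	const (
		minRate float32 = 0.0 // assumed; use the accessor values at runtime
		maxRate float32 = 1.0 // assumed
	)
	fmt.Println(clampRate(1.4, minRate, maxRate)) // clamped to the maximum
	fmt.Println(clampRate(0.6, minRate, maxRate)) // passes through unchanged
}
```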
func (AVSpeechUtterance) SetPitchMultiplier ¶
func (s AVSpeechUtterance) SetPitchMultiplier(value float32)
func (AVSpeechUtterance) SetPostUtteranceDelay ¶
func (s AVSpeechUtterance) SetPostUtteranceDelay(value float64)
func (AVSpeechUtterance) SetPreUtteranceDelay ¶
func (s AVSpeechUtterance) SetPreUtteranceDelay(value float64)
func (AVSpeechUtterance) SetPrefersAssistiveTechnologySettings ¶
func (s AVSpeechUtterance) SetPrefersAssistiveTechnologySettings(value bool)
func (AVSpeechUtterance) SetRate ¶
func (s AVSpeechUtterance) SetRate(value float32)
func (AVSpeechUtterance) SetVoice ¶
func (s AVSpeechUtterance) SetVoice(value IAVSpeechSynthesisVoice)
func (AVSpeechUtterance) SetVolume ¶
func (s AVSpeechUtterance) SetVolume(value float32)
func (AVSpeechUtterance) SpeechString ¶
func (s AVSpeechUtterance) SpeechString() string
A string that contains the text for speech synthesis.
Discussion ¶
You can’t change an utterance’s text after initialization. If you want the speech synthesizer to speak different text, create a new utterance.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/speechString
func (AVSpeechUtterance) Voice ¶
func (s AVSpeechUtterance) Voice() IAVSpeechSynthesisVoice
The voice the speech synthesizer uses when speaking the utterance.
Discussion ¶
If you don’t specify a voice, the speech synthesizer uses the system’s default voice to speak the utterance.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/voice
func (AVSpeechUtterance) Volume ¶
func (s AVSpeechUtterance) Volume() float32
The volume the speech synthesizer uses when speaking the utterance.
Discussion ¶
Before enqueuing the utterance, set this property to a value within the range of `0.0` for silent to `1.0` for loudest volume. The default value is `1.0`. Setting this after enqueuing the utterance has no effect.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/volume
type AVSpeechUtteranceClass ¶
type AVSpeechUtteranceClass struct {
// contains filtered or unexported fields
}
func GetAVSpeechUtteranceClass ¶
func GetAVSpeechUtteranceClass() AVSpeechUtteranceClass
GetAVSpeechUtteranceClass returns the class object for AVSpeechUtterance.
func (AVSpeechUtteranceClass) Alloc ¶
func (ac AVSpeechUtteranceClass) Alloc() AVSpeechUtterance
Alloc allocates memory for a new instance of the class.
func (AVSpeechUtteranceClass) Class ¶
func (ac AVSpeechUtteranceClass) Class() objc.Class
Class returns the underlying Objective-C class pointer.
func (AVSpeechUtteranceClass) SpeechUtteranceWithAttributedString ¶
func (_AVSpeechUtteranceClass AVSpeechUtteranceClass) SpeechUtteranceWithAttributedString(string_ foundation.NSAttributedString) AVSpeechUtterance
Creates an utterance with the attributed text string that you specify for the speech synthesizer to speak.
string: A string that contains the text to speak.
Return Value ¶
An AVSpeechUtterance object that can speak the specified text.
Discussion ¶
To speak the text, pass the utterance to an instance of AVSpeechSynthesizer.
func (AVSpeechUtteranceClass) SpeechUtteranceWithSSMLRepresentation ¶
func (_AVSpeechUtteranceClass AVSpeechUtteranceClass) SpeechUtteranceWithSSMLRepresentation(string_ string) AVSpeechUtterance
Returns a new speech utterance with a Speech Synthesis Markup Language (SSML) string.
string: A string to speak that contains valid SSML markup. The initializer returns `nil` if you pass an invalid SSML string.
Return Value ¶
A new speech utterance, or nil if the SSML string is invalid.
Discussion ¶
If you use SSML to request voices with certain attributes, the system may split a single utterance into multiple parts and send each to an appropriate synthesizer.
If no voice matches the properties, the utterance uses the voice set in its [Voice] property. If you don’t specify a voice, the system uses its default voice.
func (AVSpeechUtteranceClass) SpeechUtteranceWithString ¶
func (_AVSpeechUtteranceClass AVSpeechUtteranceClass) SpeechUtteranceWithString(string_ string) AVSpeechUtterance
Creates an utterance with the text string that you specify for the speech synthesizer to speak.
string: A string that contains the text to speak.
Return Value ¶
An AVSpeechUtterance object that can speak the specified text.
Discussion ¶
To speak the text, pass the utterance to an instance of AVSpeechSynthesizer.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance/speechUtteranceWithString:
type AvaudiosessioninterruptionflagsShouldresume ¶
type AvaudiosessioninterruptionflagsShouldresume uint
const (
	// Deprecated.
	AVAudioSessionInterruptionFlags_ShouldResume AvaudiosessioninterruptionflagsShouldresume = 1
)
func (AvaudiosessioninterruptionflagsShouldresume) String ¶
func (e AvaudiosessioninterruptionflagsShouldresume) String() string
type AvaudiosessionsetactiveflagsNotifyothersondeactivation ¶
type AvaudiosessionsetactiveflagsNotifyothersondeactivation uint
const (
	// Deprecated.
	AVAudioSessionSetActiveFlags_NotifyOthersOnDeactivation AvaudiosessionsetactiveflagsNotifyothersondeactivation = 1
)
func (AvaudiosessionsetactiveflagsNotifyothersondeactivation) String ¶
func (e AvaudiosessionsetactiveflagsNotifyothersondeactivation) String() string
type BoolErrorHandler ¶
BoolErrorHandler is a completion handler the system calls asynchronously when it completes audio routing arbitration.
- defaultDeviceChanged: A Boolean value that indicates whether the system switched the AirPods to the macOS device.
- error: An error object that indicates why the request failed, or nil if the request succeeded.
The error can be type-asserted to *foundation.NSError for Domain, Code, and UserInfo.
type BoolHandler ¶
type BoolHandler = func(bool)
BoolHandler receives a Boolean value that indicates whether the user grants the app permission to record audio.
Used by:
- [AVAudioApplication.RequestRecordPermissionWithCompletionHandler]
- AVAudioApplication.SetInputMuteStateChangeHandlerError
type ErrorHandler ¶
type ErrorHandler = func()
ErrorHandler is the handler the system calls after the player schedules the file for playback on the render thread, or when the player stops.
Used by:
- AVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler
- AVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionHandler
- AVAudioPlayerNode.ScheduleBufferCompletionCallbackTypeCompletionHandler
- AVAudioPlayerNode.ScheduleBufferCompletionHandler
- AVAudioPlayerNode.ScheduleFileAtTimeCompletionCallbackTypeCompletionHandler
- AVAudioPlayerNode.ScheduleFileAtTimeCompletionHandler
- AVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler
- AVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler
- AVMIDIPlayer.Play
type IAVAUPresetEvent ¶
type IAVAUPresetEvent interface {
IAVMusicEvent
// Creates an event with the scope, element, and dictionary for the preset.
InitWithScopeElementDictionary(scope uint32, element uint32, presetDictionary foundation.INSDictionary) AVAUPresetEvent
// The audio unit scope.
Scope() uint32
SetScope(value uint32)
// The element index in the scope.
Element() uint32
SetElement(value uint32)
// The dictionary that contains the preset.
PresetDictionary() foundation.INSDictionary
}
An interface definition for the AVAUPresetEvent class.
Creating a Preset Event ¶
- [IAVAUPresetEvent.InitWithScopeElementDictionary]: Creates an event with the scope, element, and dictionary for the preset.
Configuring a Preset Event ¶
- [IAVAUPresetEvent.Scope]: The audio unit scope.
- [IAVAUPresetEvent.SetScope]
- [IAVAUPresetEvent.Element]: The element index in the scope.
- [IAVAUPresetEvent.SetElement]
- [IAVAUPresetEvent.PresetDictionary]: The dictionary that contains the preset.
See: https://developer.apple.com/documentation/AVFAudio/AVAUPresetEvent
type IAVAudioApplication ¶
type IAVAudioApplication interface {
objectivec.IObject
// The app’s permission to record audio.
RecordPermission() AVAudioApplicationRecordPermission
// A Boolean value that indicates whether the app’s audio input is in a muted state.
InputMuted() bool
// Sets a Boolean value that indicates whether the app’s audio input is in a muted state.
SetInputMutedError(muted bool) (bool, error)
// Sets a callback to handle changes to application-level audio muting states.
SetInputMuteStateChangeHandlerError(inputMuteHandler func(bool) bool) (bool, error)
}
An interface definition for the AVAudioApplication class.
Requesting audio recording permission ¶
- [IAVAudioApplication.RecordPermission]: The app’s permission to record audio.
Managing audio input mute state ¶
- [IAVAudioApplication.InputMuted]: A Boolean value that indicates whether the app’s audio input is in a muted state.
- [IAVAudioApplication.SetInputMutedError]: Sets a Boolean value that indicates whether the app’s audio input is in a muted state.
- [IAVAudioApplication.SetInputMuteStateChangeHandlerError]: Sets a callback to handle changes to application-level audio muting states.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioApplication
type IAVAudioBuffer ¶
type IAVAudioBuffer interface {
objectivec.IObject
// The format of the audio in the buffer.
Format() IAVAudioFormat
// The buffer’s underlying audio buffer list.
AudioBufferList() objectivec.IObject
// A mutable version of the buffer’s underlying audio buffer list.
MutableAudioBufferList() objectivec.IObject
}
An interface definition for the AVAudioBuffer class.
Getting the Buffer Format ¶
- [IAVAudioBuffer.Format]: The format of the audio in the buffer.
Getting the Audio Buffers ¶
- [IAVAudioBuffer.AudioBufferList]: The buffer’s underlying audio buffer list.
- [IAVAudioBuffer.MutableAudioBufferList]: A mutable version of the buffer’s underlying audio buffer list.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioBuffer
type IAVAudioChannelLayout ¶
type IAVAudioChannelLayout interface {
objectivec.IObject
// Creates an audio channel layout object from an existing one.
InitWithLayout(layout IAVAudioChannelLayout) AVAudioChannelLayout
// Creates an audio channel layout object from a layout tag.
InitWithLayoutTag(layoutTag objectivec.IObject) AVAudioChannelLayout
// The number of channels of audio data.
ChannelCount() AVAudioChannelCount
// The underlying audio channel layout.
Layout() IAVAudioChannelLayout
// The audio channel’s underlying layout tag.
LayoutTag() objectivec.IObject
AVChannelLayoutKey() string
EncodeWithCoder(coder foundation.INSCoder)
}
An interface definition for the AVAudioChannelLayout class.
Creating an Audio Channel Layout ¶
- [IAVAudioChannelLayout.InitWithLayout]: Creates an audio channel layout object from an existing one.
- [IAVAudioChannelLayout.InitWithLayoutTag]: Creates an audio channel layout object from a layout tag.
Getting Audio Channel Layout Properties ¶
- [IAVAudioChannelLayout.ChannelCount]: The number of channels of audio data.
- [IAVAudioChannelLayout.Layout]: The underlying audio channel layout.
- [IAVAudioChannelLayout.LayoutTag]: The audio channel’s underlying layout tag.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioChannelLayout
type IAVAudioCompressedBuffer ¶
type IAVAudioCompressedBuffer interface {
IAVAudioBuffer
// Creates a buffer that contains constant bytes per packet of audio data in a compressed state.
InitWithFormatPacketCapacity(format IAVAudioFormat, packetCapacity AVAudioPacketCount) AVAudioCompressedBuffer
// Creates a buffer that contains audio data in a compressed state.
InitWithFormatPacketCapacityMaximumPacketSize(format IAVAudioFormat, packetCapacity AVAudioPacketCount, maximumPacketSize int) AVAudioCompressedBuffer
// The buffer’s capacity, in bytes.
ByteCapacity() uint32
// The number of valid bytes in the buffer.
ByteLength() uint32
SetByteLength(value uint32)
// The audio buffer’s data bytes.
Data() unsafe.Pointer
// The maximum size of a packet, in bytes.
MaximumPacketSize() int
// The total number of packets that the buffer can contain.
PacketCapacity() AVAudioPacketCount
// The number of packets currently in the buffer.
PacketCount() AVAudioPacketCount
SetPacketCount(value AVAudioPacketCount)
// The buffer’s array of packet descriptions.
PacketDescriptions() objectivec.IObject
// The buffer’s array of packet dependencies.
PacketDependencies() objectivec.IObject
}
An interface definition for the AVAudioCompressedBuffer class.
Creating an Audio Buffer ¶
- [IAVAudioCompressedBuffer.InitWithFormatPacketCapacity]: Creates a buffer that contains constant bytes per packet of audio data in a compressed state.
- [IAVAudioCompressedBuffer.InitWithFormatPacketCapacityMaximumPacketSize]: Creates a buffer that contains audio data in a compressed state.
Getting Audio Buffer Properties ¶
- [IAVAudioCompressedBuffer.ByteCapacity]: The buffer’s capacity, in bytes.
- [IAVAudioCompressedBuffer.ByteLength]: The number of valid bytes in the buffer.
- [IAVAudioCompressedBuffer.SetByteLength]
- [IAVAudioCompressedBuffer.Data]: The audio buffer’s data bytes.
- [IAVAudioCompressedBuffer.MaximumPacketSize]: The maximum size of a packet, in bytes.
- [IAVAudioCompressedBuffer.PacketCapacity]: The total number of packets that the buffer can contain.
- [IAVAudioCompressedBuffer.PacketCount]: The number of packets currently in the buffer.
- [IAVAudioCompressedBuffer.SetPacketCount]
- [IAVAudioCompressedBuffer.PacketDescriptions]: The buffer’s array of packet descriptions.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioCompressedBuffer
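As a sketch of how the two initializers relate to the capacity accessors — `allocCompressedBuffer` and the `aacFormat` value are assumptions standing in for this binding’s actual allocation API; only the methods shown in the interface above come from this package:

```go
// Size a buffer for up to 8 compressed packets of at most 1024 bytes
// each. aacFormat is assumed to be an IAVAudioFormat describing a
// compressed (e.g. AAC) stream, obtained elsewhere.
buf := allocCompressedBuffer().InitWithFormatPacketCapacityMaximumPacketSize(aacFormat, 8, 1024)

// The byte capacity follows from the packet capacity and the
// per-packet maximum size.
packets := buf.PacketCapacity() // 8
capBytes := buf.ByteCapacity()  // 8 * 1024

// After an encoder fills the buffer, PacketCount and ByteLength report
// how much of that capacity holds valid data.
_ = buf.PacketCount()
_ = buf.ByteLength()
_, _ = packets, capBytes
```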
type IAVAudioConnectionPoint ¶
type IAVAudioConnectionPoint interface {
objectivec.IObject
// Creates a connection point object.
InitWithNodeBus(node IAVAudioNode, bus AVAudioNodeBus) AVAudioConnectionPoint
// Returns connection information about a node’s input bus.
InputConnectionPointForNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus) IAVAudioConnectionPoint
// Returns connection information about a node’s output bus.
OutputConnectionPointsForNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus) []AVAudioConnectionPoint
// The bus on the node in the connection point.
Bus() AVAudioNodeBus
// The node in the connection point.
Node() IAVAudioNode
}
An interface definition for the AVAudioConnectionPoint class.
Creating a Connection Point ¶
- [IAVAudioConnectionPoint.InitWithNodeBus]: Creates a connection point object.
Getting Connection Point Properties ¶
- [IAVAudioConnectionPoint.InputConnectionPointForNodeInputBus]: Returns connection information about a node’s input bus.
- [IAVAudioConnectionPoint.OutputConnectionPointsForNodeOutputBus]: Returns connection information about a node’s output bus.
- [IAVAudioConnectionPoint.Bus]: The bus on the node in the connection point.
- [IAVAudioConnectionPoint.Node]: The node in the connection point.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConnectionPoint
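A brief sketch of the query methods — the `engine` and `mixer` values are assumed to be an already-configured AVAudioEngine and an attached node:

```go
// Ask how output bus 0 of a mixer node fans out. Each connection
// point pairs a destination node with the input bus it feeds.
points := engine.OutputConnectionPointsForNodeOutputBus(mixer, 0)
for _, p := range points {
	log.Printf("feeds node %v on input bus %d", p.Node(), p.Bus())
}
```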
type IAVAudioConverter ¶
type IAVAudioConverter interface {
objectivec.IObject
// Creates an audio converter object from the specified input and output formats.
InitFromFormatToFormat(fromFormat IAVAudioFormat, toFormat IAVAudioFormat) AVAudioConverter
// Performs a conversion between audio formats, if the system supports it.
ConvertToBufferErrorWithInputFromBlock(outputBuffer IAVAudioBuffer, outError foundation.INSError, inputBlock AVAudioConverterInputBlock) AVAudioConverterOutputStatus
// Performs a basic conversion between audio formats that doesn’t involve converting codecs or sample rates.
ConvertToBufferFromBufferError(outputBuffer IAVAudioPCMBuffer, inputBuffer IAVAudioPCMBuffer) (bool, error)
// Resets the converter so you can convert a new audio stream.
Reset()
// An array of integers that indicates which input to derive each output from.
ChannelMap() []foundation.NSNumber
SetChannelMap(value []foundation.NSNumber)
// A Boolean value that indicates whether dither is on.
Dither() bool
SetDither(value bool)
// A Boolean value that indicates whether the converter mixes down the channels instead of remapping them.
Downmix() bool
SetDownmix(value bool)
// The format of the input audio stream.
InputFormat() IAVAudioFormat
// The format of the output audio stream.
OutputFormat() IAVAudioFormat
// An object that contains metadata for encoders and decoders.
MagicCookie() foundation.INSData
SetMagicCookie(value foundation.INSData)
// The maximum size of an output packet, in bytes.
MaximumOutputPacketSize() int
// An array of bit rates the framework applies during encoding according to the current formats and settings.
ApplicableEncodeBitRates() []foundation.NSNumber
// An array of all bit rates the codec provides when encoding.
AvailableEncodeBitRates() []foundation.NSNumber
// An array of all output channel layout tags the codec provides when encoding.
AvailableEncodeChannelLayoutTags() []foundation.NSNumber
// The bit rate, in bits per second.
BitRate() int
SetBitRate(value int)
// A key value constant the framework uses during encoding.
BitRateStrategy() string
SetBitRateStrategy(value string)
// A sample rate converter audio quality key value.
SampleRateConverterQuality() int
SetSampleRateConverterQuality(value int)
// A sample rate converter algorithm key value.
SampleRateConverterAlgorithm() string
SetSampleRateConverterAlgorithm(value string)
// An array of output sample rates that the converter applies according to the current formats and settings, when encoding.
ApplicableEncodeSampleRates() []foundation.NSNumber
// An array of all output sample rates the codec provides when encoding.
AvailableEncodeSampleRates() []foundation.NSNumber
// The number of priming frames the converter uses.
PrimeInfo() AVAudioConverterPrimeInfo
SetPrimeInfo(value AVAudioConverterPrimeInfo)
// The priming method the sample rate converter or decoder uses.
PrimeMethod() AVAudioConverterPrimeMethod
SetPrimeMethod(value AVAudioConverterPrimeMethod)
AudioSyncPacketFrequency() int
SetAudioSyncPacketFrequency(value int)
ContentSource() AVAudioContentSource
SetContentSource(value AVAudioContentSource)
DynamicRangeControlConfiguration() AVAudioDynamicRangeControlConfiguration
SetDynamicRangeControlConfiguration(value AVAudioDynamicRangeControlConfiguration)
}
An interface definition for the AVAudioConverter class.
Creating an Audio Converter ¶
- [IAVAudioConverter.InitFromFormatToFormat]: Creates an audio converter object from the specified input and output formats.
Converting Audio Formats ¶
- [IAVAudioConverter.ConvertToBufferErrorWithInputFromBlock]: Performs a conversion between audio formats, if the system supports it.
- [IAVAudioConverter.ConvertToBufferFromBufferError]: Performs a basic conversion between audio formats that doesn’t involve converting codecs or sample rates.
Resetting an Audio Converter ¶
- [IAVAudioConverter.Reset]: Resets the converter so you can convert a new audio stream.
Getting Audio Converter Properties ¶
- [IAVAudioConverter.ChannelMap]: An array of integers that indicates which input to derive each output from.
- [IAVAudioConverter.SetChannelMap]
- [IAVAudioConverter.Dither]: A Boolean value that indicates whether dither is on.
- [IAVAudioConverter.SetDither]
- [IAVAudioConverter.Downmix]: A Boolean value that indicates whether the converter mixes down the channels instead of remapping them.
- [IAVAudioConverter.SetDownmix]
- [IAVAudioConverter.InputFormat]: The format of the input audio stream.
- [IAVAudioConverter.OutputFormat]: The format of the output audio stream.
- [IAVAudioConverter.MagicCookie]: An object that contains metadata for encoders and decoders.
- [IAVAudioConverter.SetMagicCookie]
- [IAVAudioConverter.MaximumOutputPacketSize]: The maximum size of an output packet, in bytes.
Getting Bit Rate Properties ¶
- [IAVAudioConverter.ApplicableEncodeBitRates]: An array of bit rates the framework applies during encoding according to the current formats and settings.
- [IAVAudioConverter.AvailableEncodeBitRates]: An array of all bit rates the codec provides when encoding.
- [IAVAudioConverter.AvailableEncodeChannelLayoutTags]: An array of all output channel layout tags the codec provides when encoding.
- [IAVAudioConverter.BitRate]: The bit rate, in bits per second.
- [IAVAudioConverter.SetBitRate]
- [IAVAudioConverter.BitRateStrategy]: A key value constant the framework uses during encoding.
- [IAVAudioConverter.SetBitRateStrategy]
Getting Sample Rate Properties ¶
- [IAVAudioConverter.SampleRateConverterQuality]: A sample rate converter audio quality key value.
- [IAVAudioConverter.SetSampleRateConverterQuality]
- [IAVAudioConverter.SampleRateConverterAlgorithm]: A sample rate converter algorithm key value.
- [IAVAudioConverter.SetSampleRateConverterAlgorithm]
- [IAVAudioConverter.ApplicableEncodeSampleRates]: An array of output sample rates that the converter applies according to the current formats and settings, when encoding.
- [IAVAudioConverter.AvailableEncodeSampleRates]: An array of all output sample rates the codec provides when encoding.
Getting Priming Information ¶
- [IAVAudioConverter.PrimeInfo]: The number of priming frames the converter uses.
- [IAVAudioConverter.SetPrimeInfo]
- [IAVAudioConverter.PrimeMethod]: The priming method the sample rate converter or decoder uses.
- [IAVAudioConverter.SetPrimeMethod]
Managing packet dependencies ¶
- [IAVAudioConverter.AudioSyncPacketFrequency]
- [IAVAudioConverter.SetAudioSyncPacketFrequency]
- [IAVAudioConverter.ContentSource]
- [IAVAudioConverter.SetContentSource]
- [IAVAudioConverter.DynamicRangeControlConfiguration]
- [IAVAudioConverter.SetDynamicRangeControlConfiguration]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioConverter
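As a hedged sketch of a basic PCM-to-PCM conversion — `allocFormat`, `allocConverter`, `inBuf`, and `outBuf` are assumptions standing in for this binding’s allocation helpers and for buffers allocated elsewhere:

```go
// Downmix 44.1 kHz stereo to 44.1 kHz mono. Same sample rate, no
// codec change, so the basic buffer-to-buffer call applies.
inFmt := allocFormat().InitStandardFormatWithSampleRateChannels(44100, 2)
outFmt := allocFormat().InitStandardFormatWithSampleRateChannels(44100, 1)

conv := allocConverter().InitFromFormatToFormat(inFmt, outFmt)
conv.SetDownmix(true) // mix the stereo pair down rather than dropping a channel

// inBuf and outBuf are AVAudioPCMBuffer values in the input and
// output formats.
if ok, err := conv.ConvertToBufferFromBufferError(outBuf, inBuf); !ok {
	log.Fatalf("conversion failed: %v", err)
}
```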
type IAVAudioEngine ¶
type IAVAudioEngine interface {
objectivec.IObject
// Attaches an audio node to the audio engine.
AttachNode(node IAVAudioNode)
// Detaches an audio node from the audio engine.
DetachNode(node IAVAudioNode)
// A read-only set that contains the nodes you attach to the audio engine.
AttachedNodes() foundation.INSSet
// The audio engine’s singleton input audio node.
InputNode() IAVAudioInputNode
// The audio engine’s singleton output audio node.
OutputNode() IAVAudioOutputNode
// The audio engine’s optional singleton main mixer node.
MainMixerNode() IAVAudioMixerNode
// Establishes a connection between two nodes.
ConnectToFormat(node1 IAVAudioNode, node2 IAVAudioNode, format IAVAudioFormat)
// Establishes a connection between two nodes, specifying the input and output busses.
ConnectToFromBusToBusFormat(node1 IAVAudioNode, node2 IAVAudioNode, bus1 AVAudioNodeBus, bus2 AVAudioNodeBus, format IAVAudioFormat)
// Removes all input connections of the node.
DisconnectNodeInput(node IAVAudioNode)
// Removes the input connection of a node on the specified bus.
DisconnectNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus)
// Removes all output connections of a node.
DisconnectNodeOutput(node IAVAudioNode)
// Removes the output connection of a node on the specified bus.
DisconnectNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus)
// Establishes a MIDI connection between two nodes.
ConnectMIDIToFormatEventListBlock(sourceNode IAVAudioNode, destinationNode IAVAudioNode, format IAVAudioFormat, tapBlock objectivec.IObject)
// Establishes a MIDI connection between a source node and multiple destination nodes.
ConnectMIDIToNodesFormatEventListBlock(sourceNode IAVAudioNode, destinationNodes []AVAudioNode, format IAVAudioFormat, tapBlock objectivec.IObject)
// Removes a MIDI connection between two nodes.
DisconnectMIDIFrom(sourceNode IAVAudioNode, destinationNode IAVAudioNode)
// Removes a MIDI connection between one source node and multiple destination nodes.
DisconnectMIDIFromNodes(sourceNode IAVAudioNode, destinationNodes []AVAudioNode)
// Disconnects all input MIDI connections from a node.
DisconnectMIDIInput(node IAVAudioNode)
// Disconnects all output MIDI connections from a node.
DisconnectMIDIOutput(node IAVAudioNode)
// Prepares the audio engine for starting.
Prepare()
// Starts the audio engine.
StartAndReturnError() (bool, error)
// A Boolean value that indicates whether the audio engine is running.
Running() bool
// Pauses the audio engine.
Pause()
// Stops the audio engine and releases any previously prepared resources.
Stop()
// Resets all audio nodes in the audio engine.
Reset()
// The music sequence instance that you attach to the audio engine, if any.
MusicSequence() objectivec.IObject
SetMusicSequence(value objectivec.IObject)
// Sets the engine to operate in manual rendering mode with the render format and maximum frame count you specify.
EnableManualRenderingModeFormatMaximumFrameCountError(mode AVAudioEngineManualRenderingMode, pcmFormat IAVAudioFormat, maximumFrameCount AVAudioFrameCount) (bool, error)
// Sets the engine to render to or from an audio device.
DisableManualRenderingMode()
// Makes a render call to the engine operating in the offline manual rendering mode.
RenderOfflineToBufferError(numberOfFrames AVAudioFrameCount, buffer IAVAudioPCMBuffer) (AVAudioEngineManualRenderingStatus, error)
// The block that renders the engine when operating in manual rendering mode.
ManualRenderingBlock() AVAudioEngineManualRenderingBlock
// The render format of the engine in manual rendering mode.
ManualRenderingFormat() IAVAudioFormat
// The maximum number of PCM sample frames the engine produces in any single render call in manual rendering mode.
ManualRenderingMaximumFrameCount() AVAudioFrameCount
// The manual rendering mode configured on the engine.
ManualRenderingMode() AVAudioEngineManualRenderingMode
// An indication of where the engine is on its render timeline in manual rendering mode.
ManualRenderingSampleTime() AVAudioFramePosition
// A Boolean value that indicates whether autoshutdown is in an enabled state.
AutoShutdownEnabled() bool
SetAutoShutdownEnabled(value bool)
// A Boolean value that indicates whether the engine is operating in manual rendering mode.
IsInManualRenderingMode() bool
// Establishes a connection between a source node and multiple destination nodes.
ConnectToConnectionPointsFromBusFormat(sourceNode IAVAudioNode, destNodes []AVAudioConnectionPoint, sourceBus AVAudioNodeBus, format IAVAudioFormat)
// Returns connection information about a node’s input bus.
InputConnectionPointForNodeInputBus(node IAVAudioNode, bus AVAudioNodeBus) IAVAudioConnectionPoint
// Returns connection information about a node’s output bus.
OutputConnectionPointsForNodeOutputBus(node IAVAudioNode, bus AVAudioNodeBus) []AVAudioConnectionPoint
}
An interface definition for the AVAudioEngine class.
Attaching and Detaching Audio Nodes ¶
- [IAVAudioEngine.AttachNode]: Attaches an audio node to the audio engine.
- [IAVAudioEngine.DetachNode]: Detaches an audio node from the audio engine.
- [IAVAudioEngine.AttachedNodes]: A read-only set that contains the nodes you attach to the audio engine.
Getting the Input, Output, and Main Mixer Nodes ¶
- [IAVAudioEngine.InputNode]: The audio engine’s singleton input audio node.
- [IAVAudioEngine.OutputNode]: The audio engine’s singleton output audio node.
- [IAVAudioEngine.MainMixerNode]: The audio engine’s optional singleton main mixer node.
Connecting and Disconnecting Audio Nodes ¶
- [IAVAudioEngine.ConnectToFormat]: Establishes a connection between two nodes.
- [IAVAudioEngine.ConnectToFromBusToBusFormat]: Establishes a connection between two nodes, specifying the input and output busses.
- [IAVAudioEngine.DisconnectNodeInput]: Removes all input connections of the node.
- [IAVAudioEngine.DisconnectNodeInputBus]: Removes the input connection of a node on the specified bus.
- [IAVAudioEngine.DisconnectNodeOutput]: Removes all output connections of a node.
- [IAVAudioEngine.DisconnectNodeOutputBus]: Removes the output connection of a node on the specified bus.
Managing MIDI Nodes ¶
- [IAVAudioEngine.ConnectMIDIToFormatEventListBlock]: Establishes a MIDI connection between two nodes.
- [IAVAudioEngine.ConnectMIDIToNodesFormatEventListBlock]: Establishes a MIDI connection between a source node and multiple destination nodes.
- [IAVAudioEngine.DisconnectMIDIFrom]: Removes a MIDI connection between two nodes.
- [IAVAudioEngine.DisconnectMIDIFromNodes]: Removes a MIDI connection between one source node and multiple destination nodes.
- [IAVAudioEngine.DisconnectMIDIInput]: Disconnects all input MIDI connections from a node.
- [IAVAudioEngine.DisconnectMIDIOutput]: Disconnects all output MIDI connections from a node.
Playing Audio ¶
- [IAVAudioEngine.Prepare]: Prepares the audio engine for starting.
- [IAVAudioEngine.StartAndReturnError]: Starts the audio engine.
- [IAVAudioEngine.Running]: A Boolean value that indicates whether the audio engine is running.
- [IAVAudioEngine.Pause]: Pauses the audio engine.
- [IAVAudioEngine.Stop]: Stops the audio engine and releases any previously prepared resources.
- [IAVAudioEngine.Reset]: Resets all audio nodes in the audio engine.
- [IAVAudioEngine.MusicSequence]: The music sequence instance that you attach to the audio engine, if any.
- [IAVAudioEngine.SetMusicSequence]
Manually Rendering an Audio Engine ¶
- [IAVAudioEngine.EnableManualRenderingModeFormatMaximumFrameCountError]: Sets the engine to operate in manual rendering mode with the render format and maximum frame count you specify.
- [IAVAudioEngine.DisableManualRenderingMode]: Sets the engine to render to or from an audio device.
- [IAVAudioEngine.RenderOfflineToBufferError]: Makes a render call to the engine operating in the offline manual rendering mode.
Getting Manual Rendering Properties ¶
- [IAVAudioEngine.ManualRenderingBlock]: The block that renders the engine when operating in manual rendering mode.
- [IAVAudioEngine.ManualRenderingFormat]: The render format of the engine in manual rendering mode.
- [IAVAudioEngine.ManualRenderingMaximumFrameCount]: The maximum number of PCM sample frames the engine produces in any single render call in manual rendering mode.
- [IAVAudioEngine.ManualRenderingMode]: The manual rendering mode configured on the engine.
- [IAVAudioEngine.ManualRenderingSampleTime]: An indication of where the engine is on its render timeline in manual rendering mode.
- [IAVAudioEngine.AutoShutdownEnabled]: A Boolean value that indicates whether autoshutdown is in an enabled state.
- [IAVAudioEngine.SetAutoShutdownEnabled]
- [IAVAudioEngine.IsInManualRenderingMode]: A Boolean value that indicates whether the engine is operating in manual rendering mode.
Using Connection Points ¶
- [IAVAudioEngine.ConnectToConnectionPointsFromBusFormat]: Establishes a connection between a source node and multiple destination nodes.
- [IAVAudioEngine.InputConnectionPointForNodeInputBus]: Returns connection information about a node’s input bus.
- [IAVAudioEngine.OutputConnectionPointsForNodeOutputBus]: Returns connection information about a node’s output bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEngine
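A minimal sketch of the attach/connect/start lifecycle — `allocEngine` is an assumed allocation helper, and `player` and `format` are a source node and its output IAVAudioFormat created elsewhere; the methods themselves are those listed above:

```go
engine := allocEngine()
engine.AttachNode(player)

// Route the player into the main mixer; the mixer connects to the
// output node implicitly.
engine.ConnectToFormat(player, engine.MainMixerNode(), format)

engine.Prepare()
if ok, err := engine.StartAndReturnError(); !ok {
	log.Fatalf("engine failed to start: %v", err)
}
defer engine.Stop()
```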
type IAVAudioEnvironmentDistanceAttenuationParameters ¶
type IAVAudioEnvironmentDistanceAttenuationParameters interface {
objectivec.IObject
// The distance attenuation model that describes the drop-off in gain as the source moves away from the listener.
DistanceAttenuationModel() AVAudioEnvironmentDistanceAttenuationModel
SetDistanceAttenuationModel(value AVAudioEnvironmentDistanceAttenuationModel)
// The distance beyond which the node applies no further attenuation, in meters.
MaximumDistance() float32
SetMaximumDistance(value float32)
// The minimum distance at which the node applies attenuation, in meters.
ReferenceDistance() float32
SetReferenceDistance(value float32)
// A factor that determines the attenuation curve.
RolloffFactor() float32
SetRolloffFactor(value float32)
}
An interface definition for the AVAudioEnvironmentDistanceAttenuationParameters class.
Getting and Setting the Attenuation Model ¶
- [IAVAudioEnvironmentDistanceAttenuationParameters.DistanceAttenuationModel]: The distance attenuation model that describes the drop-off in gain as the source moves away from the listener.
- [IAVAudioEnvironmentDistanceAttenuationParameters.SetDistanceAttenuationModel]
Getting and Setting the Attenuation Values ¶
- [IAVAudioEnvironmentDistanceAttenuationParameters.MaximumDistance]: The distance beyond which the node applies no further attenuation, in meters.
- [IAVAudioEnvironmentDistanceAttenuationParameters.SetMaximumDistance]
- [IAVAudioEnvironmentDistanceAttenuationParameters.ReferenceDistance]: The minimum distance at which the node applies attenuation, in meters.
- [IAVAudioEnvironmentDistanceAttenuationParameters.SetReferenceDistance]
- [IAVAudioEnvironmentDistanceAttenuationParameters.RolloffFactor]: A factor that determines the attenuation curve.
- [IAVAudioEnvironmentDistanceAttenuationParameters.SetRolloffFactor]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentDistanceAttenuationParameters
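As a sketch of how these setters combine — `att` is assumed to be the object returned by an environment node’s DistanceAttenuationParameters accessor, and `model` a value chosen from the AVAudioEnvironmentDistanceAttenuationModel constants:

```go
att.SetDistanceAttenuationModel(model)
att.SetReferenceDistance(1.0) // full gain inside 1 m
att.SetRolloffFactor(2.0)     // steeper-than-default drop-off
att.SetMaximumDistance(100.0) // no further attenuation past 100 m
```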
type IAVAudioEnvironmentNode ¶
type IAVAudioEnvironmentNode interface {
IAVAudioNode
AVAudio3DMixing
AVAudioMixing
AVAudioStereoMixing
// The listener’s position in the 3D environment.
ListenerPosition() AVAudio3DPoint
SetListenerPosition(value AVAudio3DPoint)
// The listener’s angular orientation in the environment.
ListenerAngularOrientation() AVAudio3DAngularOrientation
SetListenerAngularOrientation(value AVAudio3DAngularOrientation)
// The listener’s vector orientation in the environment.
ListenerVectorOrientation() AVAudio3DVectorOrientation
SetListenerVectorOrientation(value AVAudio3DVectorOrientation)
// The distance attenuation parameters for the environment.
DistanceAttenuationParameters() IAVAudioEnvironmentDistanceAttenuationParameters
// The reverb parameters for the environment.
ReverbParameters() IAVAudioEnvironmentReverbParameters
// The mixer’s output volume.
OutputVolume() float32
SetOutputVolume(value float32)
// The type of output hardware.
OutputType() AVAudioEnvironmentOutputType
SetOutputType(value AVAudioEnvironmentOutputType)
// An array of rendering algorithms applicable to the environment node.
ApplicableRenderingAlgorithms() []foundation.NSNumber
// A Boolean value that indicates whether the listener orientation is automatically rotated based on head orientation.
ListenerHeadTrackingEnabled() bool
SetListenerHeadTrackingEnabled(value bool)
// An unused input bus.
NextAvailableInputBus() AVAudioNodeBus
// A quadraphonic symmetrical layout, recommended for use by audio units.
KAudioChannelLayoutTag_AudioUnit_4() objectivec.IObject
SetKAudioChannelLayoutTag_AudioUnit_4(value objectivec.IObject)
// A 5-channel surround-based layout, recommended for use by audio units.
KAudioChannelLayoutTag_AudioUnit_5_0() objectivec.IObject
SetKAudioChannelLayoutTag_AudioUnit_5_0(value objectivec.IObject)
// A 6-channel surround-based layout, recommended for use by audio units.
KAudioChannelLayoutTag_AudioUnit_6_0() objectivec.IObject
SetKAudioChannelLayoutTag_AudioUnit_6_0(value objectivec.IObject)
// A 7-channel surround-based layout, recommended for use by audio units.
KAudioChannelLayoutTag_AudioUnit_7_0() objectivec.IObject
SetKAudioChannelLayoutTag_AudioUnit_7_0(value objectivec.IObject)
// An alternate 7-channel surround-based layout, for use by audio units.
KAudioChannelLayoutTag_AudioUnit_7_0_Front() objectivec.IObject
SetKAudioChannelLayoutTag_AudioUnit_7_0_Front(value objectivec.IObject)
// An octagonal symmetrical layout, recommended for use by audio units.
KAudioChannelLayoutTag_AudioUnit_8() objectivec.IObject
SetKAudioChannelLayoutTag_AudioUnit_8(value objectivec.IObject)
}
An interface definition for the AVAudioEnvironmentNode class.
Getting and Setting Positional Properties ¶
- [IAVAudioEnvironmentNode.ListenerPosition]: The listener’s position in the 3D environment.
- [IAVAudioEnvironmentNode.SetListenerPosition]
- [IAVAudioEnvironmentNode.ListenerAngularOrientation]: The listener’s angular orientation in the environment.
- [IAVAudioEnvironmentNode.SetListenerAngularOrientation]
- [IAVAudioEnvironmentNode.ListenerVectorOrientation]: The listener’s vector orientation in the environment.
- [IAVAudioEnvironmentNode.SetListenerVectorOrientation]
Getting Attenuation and Reverb Properties ¶
- [IAVAudioEnvironmentNode.DistanceAttenuationParameters]: The distance attenuation parameters for the environment.
- [IAVAudioEnvironmentNode.ReverbParameters]: The reverb parameters for the environment.
Getting and Setting Environment Properties ¶
- [IAVAudioEnvironmentNode.OutputVolume]: The mixer’s output volume.
- [IAVAudioEnvironmentNode.SetOutputVolume]
- [IAVAudioEnvironmentNode.OutputType]: The type of output hardware.
- [IAVAudioEnvironmentNode.SetOutputType]
Getting the Available Rendering Algorithms ¶
- [IAVAudioEnvironmentNode.ApplicableRenderingAlgorithms]: An array of rendering algorithms applicable to the environment node.
Getting the Head Tracking Status ¶
- [IAVAudioEnvironmentNode.ListenerHeadTrackingEnabled]: A Boolean value that indicates whether the listener orientation is automatically rotated based on head orientation.
- [IAVAudioEnvironmentNode.SetListenerHeadTrackingEnabled]
Getting the Input Bus ¶
- [IAVAudioEnvironmentNode.NextAvailableInputBus]: An unused input bus.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentNode
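A hedged sketch of positioning the listener — `env` is an attached environment node, and the field names on the 3D value types (X/Y/Z, Yaw/Pitch/Roll) are assumptions based on the underlying AVAudio3DPoint and AVAudio3DAngularOrientation structures:

```go
// Place the listener at the origin, facing straight ahead.
env.SetListenerPosition(avfaudio.AVAudio3DPoint{X: 0, Y: 0, Z: 0})
env.SetListenerAngularOrientation(avfaudio.AVAudio3DAngularOrientation{
	Yaw: 0, Pitch: 0, Roll: 0,
})
env.SetOutputVolume(0.8)
```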
type IAVAudioEnvironmentReverbParameters ¶
type IAVAudioEnvironmentReverbParameters interface {
objectivec.IObject
// A Boolean value that indicates whether reverberation is in an enabled state.
Enable() bool
SetEnable(value bool)
// Controls the amount of reverb, in decibels.
Level() float32
SetLevel(value float32)
// A filter that the system applies to the output.
FilterParameters() IAVAudioUnitEQFilterParameters
// Loads one of the reverb’s factory presets.
LoadFactoryReverbPreset(preset AVAudioUnitReverbPreset)
}
An interface definition for the AVAudioEnvironmentReverbParameters class.
Enabling and Disabling Reverb ¶
- [IAVAudioEnvironmentReverbParameters.Enable]: A Boolean value that indicates whether reverberation is in an enabled state.
- [IAVAudioEnvironmentReverbParameters.SetEnable]
Getting and Setting Reverb Values ¶
- [IAVAudioEnvironmentReverbParameters.Level]: Controls the amount of reverb, in decibels.
- [IAVAudioEnvironmentReverbParameters.SetLevel]
- [IAVAudioEnvironmentReverbParameters.FilterParameters]: A filter that the system applies to the output.
- [IAVAudioEnvironmentReverbParameters.LoadFactoryReverbPreset]: Loads one of the reverb’s factory presets.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioEnvironmentReverbParameters
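As a sketch — `env` is assumed to be an AVAudioEnvironmentNode, and the preset constant name is an assumption mirroring the Objective-C AVAudioUnitReverbPreset values:

```go
rev := env.ReverbParameters()
rev.SetEnable(true)
rev.SetLevel(-6.0) // dB; lower values blend in less reverb
rev.LoadFactoryReverbPreset(avfaudio.AVAudioUnitReverbPresetLargeHall)
```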
type IAVAudioFile ¶
type IAVAudioFile interface {
objectivec.IObject
// Opens a file for reading using the standard, deinterleaved floating point format.
InitForReadingError(fileURL foundation.INSURL) (AVAudioFile, error)
// Opens a file for reading using the specified processing format.
InitForReadingCommonFormatInterleavedError(fileURL foundation.INSURL, format AVAudioCommonFormat, interleaved bool) (AVAudioFile, error)
// Opens a file for writing using the specified settings.
InitForWritingSettingsError(fileURL foundation.INSURL, settings foundation.INSDictionary) (AVAudioFile, error)
// Opens a file for writing using a specified processing format and settings.
InitForWritingSettingsCommonFormatInterleavedError(fileURL foundation.INSURL, settings foundation.INSDictionary, format AVAudioCommonFormat, interleaved bool) (AVAudioFile, error)
// Reads an entire audio buffer.
ReadIntoBufferError(buffer IAVAudioPCMBuffer) (bool, error)
// Reads a portion of an audio buffer using the number of frames you specify.
ReadIntoBufferFrameCountError(buffer IAVAudioPCMBuffer, frames AVAudioFrameCount) (bool, error)
// Writes an audio buffer sequentially.
WriteFromBufferError(buffer IAVAudioPCMBuffer) (bool, error)
// Closes the audio file.
Close()
// The location of the audio file.
Url() foundation.INSURL
// The on-disk format of the file.
FileFormat() IAVAudioFormat
// The processing format of the file.
ProcessingFormat() IAVAudioFormat
// The number of sample frames in the file.
Length() AVAudioFramePosition
// The position in the file where the next read or write operation occurs.
FramePosition() AVAudioFramePosition
SetFramePosition(value AVAudioFramePosition)
// A string that indicates the audio file type.
AVAudioFileTypeKey() string
// A Boolean value that indicates whether the file is open.
IsOpen() bool
}
An interface definition for the AVAudioFile class.
Creating an Audio File ¶
- [IAVAudioFile.InitForReadingError]: Opens a file for reading using the standard, deinterleaved floating point format.
- [IAVAudioFile.InitForReadingCommonFormatInterleavedError]: Opens a file for reading using the specified processing format.
- [IAVAudioFile.InitForWritingSettingsError]: Opens a file for writing using the specified settings.
- [IAVAudioFile.InitForWritingSettingsCommonFormatInterleavedError]: Opens a file for writing using a specified processing format and settings.
Reading and Writing the Audio Buffer ¶
- [IAVAudioFile.ReadIntoBufferError]: Reads an entire audio buffer.
- [IAVAudioFile.ReadIntoBufferFrameCountError]: Reads a portion of an audio buffer using the number of frames you specify.
- [IAVAudioFile.WriteFromBufferError]: Writes an audio buffer sequentially.
- [IAVAudioFile.Close]: Closes the audio file.
Getting Audio File Properties ¶
- [IAVAudioFile.Url]: The location of the audio file.
- [IAVAudioFile.FileFormat]: The on-disk format of the file.
- [IAVAudioFile.ProcessingFormat]: The processing format of the file.
- [IAVAudioFile.Length]: The number of sample frames in the file.
- [IAVAudioFile.FramePosition]: The position in the file where the next read or write operation occurs.
- [IAVAudioFile.SetFramePosition]
- [IAVAudioFile.AVAudioFileTypeKey]: A string that indicates the audio file type.
- [IAVAudioFile.IsOpen]: A Boolean value that indicates whether the file is open.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFile
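A hedged sketch of reading a whole file — `allocFile` and `allocPCMBuffer` stand in for this binding’s allocation helpers, which this excerpt doesn’t show, and `url` is a foundation.INSURL for an existing audio file:

```go
// Open for reading in the standard deinterleaved float format.
file, err := allocFile().InitForReadingError(url)
if err != nil {
	log.Fatalf("open failed: %v", err)
}
defer file.Close()

// Size a PCM buffer in the file’s processing format to hold the
// whole file, then read into it.
buf := allocPCMBuffer(file.ProcessingFormat(), AVAudioFrameCount(file.Length()))
if ok, err := file.ReadIntoBufferError(buf); !ok {
	log.Fatalf("read failed: %v", err)
}
```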
type IAVAudioFormat ¶
type IAVAudioFormat interface {
objectivec.IObject
// Creates an audio format instance as a deinterleaved float with the specified sample rate and channel layout.
InitStandardFormatWithSampleRateChannelLayout(sampleRate float64, layout IAVAudioChannelLayout) AVAudioFormat
// Creates an audio format instance with the specified sample rate and channel count.
InitStandardFormatWithSampleRateChannels(sampleRate float64, channels AVAudioChannelCount) AVAudioFormat
// Creates an audio format instance.
InitWithCommonFormatSampleRateChannelsInterleaved(format AVAudioCommonFormat, sampleRate float64, channels AVAudioChannelCount, interleaved bool) AVAudioFormat
// Creates an audio format instance with the specified audio format, sample rate, interleaved state, and channel layout.
InitWithCommonFormatSampleRateInterleavedChannelLayout(format AVAudioCommonFormat, sampleRate float64, interleaved bool, layout IAVAudioChannelLayout) AVAudioFormat
// Creates an audio format instance using the specified settings dictionary.
InitWithSettings(settings foundation.INSDictionary) AVAudioFormat
// Creates an audio format instance from a stream description.
InitWithStreamDescription(asbd objectivec.IObject) AVAudioFormat
// Creates an audio format instance from a stream description and channel layout.
InitWithStreamDescriptionChannelLayout(asbd objectivec.IObject, layout IAVAudioChannelLayout) AVAudioFormat
// Creates an audio format instance from a Core Media audio format description.
InitWithCMAudioFormatDescription(formatDescription coremedia.CMFormatDescriptionRef) AVAudioFormat
// The audio format properties of a stream of audio data.
StreamDescription() objectivec.IObject
// The audio format sampling rate, in hertz.
SampleRate() float64
// The number of channels of audio data.
ChannelCount() AVAudioChannelCount
// The underlying audio channel layout.
ChannelLayout() IAVAudioChannelLayout
// The audio format description to use with Core Media APIs.
FormatDescription() coremedia.CMFormatDescriptionRef
// A Boolean value that indicates whether the samples mix into one stream.
Interleaved() bool
// A Boolean value that indicates whether the format is in a deinterleaved native-endian float state.
Standard() bool
// The common format identifier instance.
CommonFormat() AVAudioCommonFormat
// A dictionary that represents the format as a dictionary using audio setting keys.
Settings() foundation.INSDictionary
// An object that contains metadata that encoders and decoders require.
MagicCookie() foundation.INSData
SetMagicCookie(value foundation.INSData)
AVChannelLayoutKey() string
EncodeWithCoder(coder foundation.INSCoder)
}
An interface definition for the AVAudioFormat class.
Creating a New Audio Format Representation ¶
- [IAVAudioFormat.InitStandardFormatWithSampleRateChannelLayout]: Creates an audio format instance as a deinterleaved float with the specified sample rate and channel layout.
- [IAVAudioFormat.InitStandardFormatWithSampleRateChannels]: Creates an audio format instance with the specified sample rate and channel count.
- [IAVAudioFormat.InitWithCommonFormatSampleRateChannelsInterleaved]: Creates an audio format instance with the specified common format, sample rate, channel count, and interleaved state.
- [IAVAudioFormat.InitWithCommonFormatSampleRateInterleavedChannelLayout]: Creates an audio format instance with the specified audio format, sample rate, interleaved state, and channel layout.
- [IAVAudioFormat.InitWithSettings]: Creates an audio format instance using the specified settings dictionary.
- [IAVAudioFormat.InitWithStreamDescription]: Creates an audio format instance from a stream description.
- [IAVAudioFormat.InitWithStreamDescriptionChannelLayout]: Creates an audio format instance from a stream description and channel layout.
- [IAVAudioFormat.InitWithCMAudioFormatDescription]: Creates an audio format instance from a Core Media audio format description.
Getting the Audio Stream Description ¶
- [IAVAudioFormat.StreamDescription]: The audio format properties of a stream of audio data.
Getting Audio Format Values ¶
- [IAVAudioFormat.SampleRate]: The audio format sampling rate, in hertz.
- [IAVAudioFormat.ChannelCount]: The number of channels of audio data.
- [IAVAudioFormat.ChannelLayout]: The underlying audio channel layout.
- [IAVAudioFormat.FormatDescription]: The audio format description to use with Core Media APIs.
Determining the Audio Format ¶
- [IAVAudioFormat.Interleaved]: A Boolean value that indicates whether the samples mix into one stream.
- [IAVAudioFormat.Standard]: A Boolean value that indicates whether the format is in a deinterleaved native-endian float state.
- [IAVAudioFormat.CommonFormat]: The common format identifier instance.
- [IAVAudioFormat.Settings]: A dictionary that represents the format as a dictionary using audio setting keys.
- [IAVAudioFormat.MagicCookie]: An object that contains metadata that encoders and decoders require.
- [IAVAudioFormat.SetMagicCookie]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioFormat
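A format's ChannelCount, Interleaved state, and common-format bit depth together determine the byte layout of each sample frame. The helper below is a pure-Go sketch of that arithmetic; it doesn't call the bindings, and bytesPerFrame is an illustrative name, not part of this package:

```go
package main

import "fmt"

// bytesPerFrame returns the size in bytes that one sample frame
// occupies in a single audio buffer, mirroring how Core Audio's
// mBytesPerFrame works. Interleaved data packs one sample per
// channel into each frame of a single buffer; deinterleaved data
// stores one sample per frame in each per-channel buffer.
func bytesPerFrame(bitsPerSample, channels int, interleaved bool) int {
	bytesPerSample := bitsPerSample / 8
	if interleaved {
		return bytesPerSample * channels
	}
	return bytesPerSample // per channel buffer
}

func main() {
	// The "standard" format (Standard() reports true) is deinterleaved
	// 32-bit float, so each channel buffer advances 4 bytes per frame.
	fmt.Println(bytesPerFrame(32, 2, false)) // 4
	// Interleaved 16-bit stereo also advances 4 bytes per frame:
	// 2 bytes × 2 channels.
	fmt.Println(bytesPerFrame(16, 2, true)) // 4
}
```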
type IAVAudioIONode ¶
type IAVAudioIONode interface {
IAVAudioNode
// The node’s underlying audio unit, if any.
AudioUnit() IAVAudioUnit
// The presentation or hardware latency, applicable when rendering to or from an audio device.
PresentationLatency() float64
// Enables or disables voice processing on the I/O node.
SetVoiceProcessingEnabledError(enabled bool) (bool, error)
// A Boolean value that indicates whether voice processing is in an enabled state.
VoiceProcessingEnabled() bool
}
An interface definition for the AVAudioIONode class.
Getting the Audio Unit ¶
- [IAVAudioIONode.AudioUnit]: The node’s underlying audio unit, if any.
Getting the I/O Latency ¶
- [IAVAudioIONode.PresentationLatency]: The presentation or hardware latency, applicable when rendering to or from an audio device.
Getting and Setting the Voice Processing State ¶
- [IAVAudioIONode.SetVoiceProcessingEnabledError]: Enables or disables voice processing on the I/O node.
- [IAVAudioIONode.VoiceProcessingEnabled]: A Boolean value that indicates whether voice processing is in an enabled state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioIONode
type IAVAudioInputNode ¶
type IAVAudioInputNode interface {
IAVAudioIONode
AVAudio3DMixing
AVAudioMixing
AVAudioStereoMixing
// Supplies the data through the input node to the engine while operating in the manual rendering mode.
SetManualRenderingInputPCMFormatInputBlock(format IAVAudioFormat, block AVAudioIONodeInputBlock) bool
// A Boolean that indicates whether the input of the voice processing unit is in a muted state.
VoiceProcessingInputMuted() bool
SetVoiceProcessingInputMuted(value bool)
// A Boolean that indicates whether the node bypasses all microphone uplink processing of the voice-processing unit.
VoiceProcessingBypassed() bool
SetVoiceProcessingBypassed(value bool)
// A Boolean that indicates whether automatic gain control on the processed microphone uplink signal is active.
VoiceProcessingAGCEnabled() bool
SetVoiceProcessingAGCEnabled(value bool)
// The ducking configuration of nonvoice audio.
VoiceProcessingOtherAudioDuckingConfiguration() AVAudioVoiceProcessingOtherAudioDuckingConfiguration
SetVoiceProcessingOtherAudioDuckingConfiguration(value AVAudioVoiceProcessingOtherAudioDuckingConfiguration)
SetMutedSpeechActivityEventListener(listenerBlock AVAudioVoiceProcessingSpeechActivityEventHandler) bool
}
An interface definition for the AVAudioInputNode class.
Manually Giving Data to an Audio Engine ¶
- [IAVAudioInputNode.SetManualRenderingInputPCMFormatInputBlock]: Supplies the data through the input node to the engine while operating in the manual rendering mode.
Getting and Setting Voice Processing Properties ¶
- [IAVAudioInputNode.VoiceProcessingInputMuted]: A Boolean that indicates whether the input of the voice processing unit is in a muted state.
- [IAVAudioInputNode.SetVoiceProcessingInputMuted]
- [IAVAudioInputNode.VoiceProcessingBypassed]: A Boolean that indicates whether the node bypasses all microphone uplink processing of the voice-processing unit.
- [IAVAudioInputNode.SetVoiceProcessingBypassed]
- [IAVAudioInputNode.VoiceProcessingAGCEnabled]: A Boolean that indicates whether automatic gain control on the processed microphone uplink signal is active.
- [IAVAudioInputNode.SetVoiceProcessingAGCEnabled]
- [IAVAudioInputNode.VoiceProcessingOtherAudioDuckingConfiguration]: The ducking configuration of nonvoice audio.
- [IAVAudioInputNode.SetVoiceProcessingOtherAudioDuckingConfiguration]
Handling Muted Speech Events ¶
- [IAVAudioInputNode.SetMutedSpeechActivityEventListener]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioInputNode
type IAVAudioMixerNode ¶
type IAVAudioMixerNode interface {
IAVAudioNode
AVAudio3DMixing
AVAudioMixing
AVAudioStereoMixing
// The mixer’s output volume.
OutputVolume() float32
SetOutputVolume(value float32)
// An unused input bus on the mixer.
NextAvailableInputBus() AVAudioNodeBus
}
An interface definition for the AVAudioMixerNode class.
Getting and Setting the Mixer Volume ¶
- [IAVAudioMixerNode.OutputVolume]: The mixer’s output volume.
- [IAVAudioMixerNode.SetOutputVolume]
Getting an Input Bus ¶
- [IAVAudioMixerNode.NextAvailableInputBus]: An unused input bus on the mixer.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixerNode
type IAVAudioMixingDestination ¶
type IAVAudioMixingDestination interface {
objectivec.IObject
AVAudio3DMixing
AVAudioMixing
AVAudioStereoMixing
// The underlying mixer connection point.
ConnectionPoint() IAVAudioConnectionPoint
}
An interface definition for the AVAudioMixingDestination class.
Getting Mixing Destination Properties ¶
- [IAVAudioMixingDestination.ConnectionPoint]: The underlying mixer connection point.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioMixingDestination
type IAVAudioNode ¶
type IAVAudioNode interface {
objectivec.IObject
// Gets the input format for the bus you specify.
InputFormatForBus(bus AVAudioNodeBus) IAVAudioFormat
// Gets the name of the input bus you specify.
NameForInputBus(bus AVAudioNodeBus) string
// The number of input busses for the node.
NumberOfInputs() uint
// Retrieves the output format for the bus you specify.
OutputFormatForBus(bus AVAudioNodeBus) IAVAudioFormat
// Retrieves the name of the output bus you specify.
NameForOutputBus(bus AVAudioNodeBus) string
// The number of output busses for the node.
NumberOfOutputs() uint
// Installs an audio tap on a bus you specify to record, monitor, and observe the output of the node.
InstallTapOnBusBufferSizeFormatBlock(bus AVAudioNodeBus, bufferSize AVAudioFrameCount, format IAVAudioFormat, tapBlock AVAudioNodeTapBlock)
// Removes an audio tap on a bus you specify.
RemoveTapOnBus(bus AVAudioNodeBus)
// The audio engine that manages the node, if any.
Engine() IAVAudioEngine
// The most recent render time.
LastRenderTime() IAVAudioTime
// An audio unit object that wraps or underlies the implementation’s audio unit.
AUAudioUnit() objectivec.IObject
// The processing latency of the node, in seconds.
Latency() float64
// The maximum render pipeline latency downstream of the node, in seconds.
OutputPresentationLatency() float64
// Clears a unit’s previous processing state.
Reset()
}
An interface definition for the AVAudioNode class.
Configuring an Input Format Bus ¶
- [IAVAudioNode.InputFormatForBus]: Gets the input format for the bus you specify.
- [IAVAudioNode.NameForInputBus]: Gets the name of the input bus you specify.
- [IAVAudioNode.NumberOfInputs]: The number of input busses for the node.
Creating an Output Format Bus ¶
- [IAVAudioNode.OutputFormatForBus]: Retrieves the output format for the bus you specify.
- [IAVAudioNode.NameForOutputBus]: Retrieves the name of the output bus you specify.
- [IAVAudioNode.NumberOfOutputs]: The number of output busses for the node.
Installing and Removing an Audio Tap ¶
- [IAVAudioNode.InstallTapOnBusBufferSizeFormatBlock]: Installs an audio tap on a bus you specify to record, monitor, and observe the output of the node.
- [IAVAudioNode.RemoveTapOnBus]: Removes an audio tap on a bus you specify.
Getting the Audio Engine for the Node ¶
- [IAVAudioNode.Engine]: The audio engine that manages the node, if any.
Getting the Latest Node Render Time ¶
- [IAVAudioNode.LastRenderTime]: The most recent render time.
Getting Audio Node Properties ¶
- [IAVAudioNode.AUAudioUnit]: An audio unit object that wraps or underlies the implementation’s audio unit.
- [IAVAudioNode.Latency]: The processing latency of the node, in seconds.
- [IAVAudioNode.OutputPresentationLatency]: The maximum render pipeline latency downstream of the node, in seconds.
Resetting the Audio Node ¶
- [IAVAudioNode.Reset]: Clears a unit’s previous processing state.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioNode
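InstallTapOnBusBufferSizeFormatBlock delivers buffers of roughly bufferSize frames, so the wall-clock interval between tap callbacks follows from the format's sample rate. A pure-Go sketch of that arithmetic (the binding calls appear only in comments, since they need a live engine; tapIntervalSeconds is an illustrative name):

```go
package main

import "fmt"

// tapIntervalSeconds estimates how much audio one tap callback covers:
// the requested buffer size in frames divided by the format's sample
// rate (IAVAudioFormat.SampleRate in the bindings above).
func tapIntervalSeconds(bufferSize uint32, sampleRate float64) float64 {
	return float64(bufferSize) / sampleRate
}

func main() {
	// A 4096-frame tap on a 44.1 kHz bus fires roughly every 93 ms.
	fmt.Printf("%.4f s\n", tapIntervalSeconds(4096, 44100))
	// Sketch of the calls themselves (not executed here):
	//   node.InstallTapOnBusBufferSizeFormatBlock(0, 4096, format, tapBlock)
	//   ...
	//   node.RemoveTapOnBus(0)
}
```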
type IAVAudioOutputNode ¶
type IAVAudioOutputNode interface {
IAVAudioIONode
// The intended spatial experience for this output node.
IntendedSpatialExperience() objectivec.IObject
SetIntendedSpatialExperience(value objectivec.IObject)
// The render format of the engine in manual rendering mode.
ManualRenderingFormat() IAVAudioFormat
SetManualRenderingFormat(value IAVAudioFormat)
}
An interface definition for the AVAudioOutputNode class.
Configuring the Spatial Audio experience ¶
- [IAVAudioOutputNode.IntendedSpatialExperience]: The intended spatial experience for this output node.
- [IAVAudioOutputNode.SetIntendedSpatialExperience]
Configuring the Manual Rendering Format ¶
- [IAVAudioOutputNode.ManualRenderingFormat]: The render format of the engine in manual rendering mode.
- [IAVAudioOutputNode.SetManualRenderingFormat]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioOutputNode
type IAVAudioPCMBuffer ¶
type IAVAudioPCMBuffer interface {
IAVAudioBuffer
// Creates a PCM audio buffer instance for PCM audio data.
InitWithPCMFormatFrameCapacity(format IAVAudioFormat, frameCapacity AVAudioFrameCount) AVAudioPCMBuffer
// Creates a PCM audio buffer instance without copying samples, for PCM audio data, with a specified buffer list and a deallocator closure.
InitWithPCMFormatBufferListNoCopyDeallocator(format IAVAudioFormat, bufferList objectivec.IObject, deallocator constAudioBufferListHandler) AVAudioPCMBuffer
// The current number of valid sample frames in the buffer.
FrameLength() AVAudioFrameCount
SetFrameLength(value AVAudioFrameCount)
// The buffer’s audio samples as floating point values.
FloatChannelData() unsafe.Pointer
// The buffer’s capacity, in audio sample frames.
FrameCapacity() AVAudioFrameCount
// The buffer’s 16-bit integer audio samples.
Int16ChannelData() unsafe.Pointer
// The buffer’s 32-bit integer audio samples.
Int32ChannelData() unsafe.Pointer
// The buffer’s number of interleaved channels.
Stride() uint
}
An interface definition for the AVAudioPCMBuffer class.
Creating a PCM Audio Buffer ¶
- [IAVAudioPCMBuffer.InitWithPCMFormatFrameCapacity]: Creates a PCM audio buffer instance for PCM audio data.
- [IAVAudioPCMBuffer.InitWithPCMFormatBufferListNoCopyDeallocator]: Creates a PCM audio buffer instance without copying samples, for PCM audio data, with a specified buffer list and a deallocator closure.
Getting and Setting the Frame Length ¶
- [IAVAudioPCMBuffer.FrameLength]: The current number of valid sample frames in the buffer.
- [IAVAudioPCMBuffer.SetFrameLength]
Accessing PCM Buffer Data ¶
- [IAVAudioPCMBuffer.FloatChannelData]: The buffer’s audio samples as floating point values.
- [IAVAudioPCMBuffer.FrameCapacity]: The buffer’s capacity, in audio sample frames.
- [IAVAudioPCMBuffer.Int16ChannelData]: The buffer’s 16-bit integer audio samples.
- [IAVAudioPCMBuffer.Int32ChannelData]: The buffer’s 32-bit integer audio samples.
- [IAVAudioPCMBuffer.Stride]: The buffer’s number of interleaved channels.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPCMBuffer
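The frameCapacity you pass to InitWithPCMFormatFrameCapacity, and the FrameLength a buffer reports, relate to wall-clock time through the format's sample rate. A minimal pure-Go sketch of both directions (framesForDuration and bufferDuration are illustrative helpers, not part of this package; the constructor line in the comment assumes the package's usual Alloc/Init pattern):

```go
package main

import "fmt"

// framesForDuration computes the frame capacity needed for a buffer
// to hold `seconds` of audio at the given sample rate. AVAudioFrameCount
// is a uint32 in these bindings.
func framesForDuration(seconds, sampleRate float64) uint32 {
	return uint32(seconds * sampleRate)
}

// bufferDuration is the inverse: the seconds of valid audio a buffer
// holds, given its FrameLength and the format's SampleRate.
func bufferDuration(frameLength uint32, sampleRate float64) float64 {
	return float64(frameLength) / sampleRate
}

func main() {
	frames := framesForDuration(0.5, 48000) // half a second at 48 kHz
	fmt.Println(frames)                     // 24000
	fmt.Println(bufferDuration(frames, 48000))
	// Sketch with the bindings above (needs a live format; not executed):
	//   buf := AVAudioPCMBufferClass.Alloc().InitWithPCMFormatFrameCapacity(format, frames)
}
```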
type IAVAudioPlayer ¶
type IAVAudioPlayer interface {
objectivec.IObject
// Creates a player to play audio from a file.
InitWithContentsOfURLError(url foundation.INSURL) (AVAudioPlayer, error)
// Creates a player to play audio from a file of a particular type.
InitWithContentsOfURLFileTypeHintError(url foundation.INSURL, utiString string) (AVAudioPlayer, error)
// Creates a player to play in-memory audio data.
InitWithDataError(data foundation.INSData) (AVAudioPlayer, error)
// Creates a player to play in-memory audio data of a particular type.
InitWithDataFileTypeHintError(data foundation.INSData, utiString string) (AVAudioPlayer, error)
// Prepares the player for audio playback.
PrepareToPlay() bool
// Plays audio asynchronously.
Play() bool
// Plays audio asynchronously, starting at a specified point in the audio output device’s timeline.
PlayAtTime(time float64) bool
// Pauses audio playback.
Pause()
// Stops playback and undoes the setup the system requires for playback.
Stop()
// A Boolean value that indicates whether the player is currently playing audio.
Playing() bool
// The audio player’s volume relative to other audio output.
Volume() float32
SetVolume(value float32)
// Changes the audio player’s volume over a duration of time.
SetVolumeFadeDuration(volume float32, duration float64)
// The audio player’s stereo pan position.
Pan() float32
SetPan(value float32)
// A Boolean value that indicates whether you can adjust the playback rate of the audio player.
EnableRate() bool
SetEnableRate(value bool)
// The audio player’s playback rate.
Rate() float32
SetRate(value float32)
// The number of times the audio repeats playback.
NumberOfLoops() int
SetNumberOfLoops(value int)
// The current playback time, in seconds, within the audio timeline.
CurrentTime() float64
SetCurrentTime(value float64)
// The total duration, in seconds, of the player’s audio.
Duration() float64
// The intended spatial experience for this player.
IntendedSpatialExperience() objectivec.IObject
SetIntendedSpatialExperience(value objectivec.IObject)
// The number of audio channels in the player’s audio.
NumberOfChannels() uint
// A Boolean value that indicates whether the player is able to generate audio-level metering data.
MeteringEnabled() bool
SetMeteringEnabled(value bool)
// Refreshes the average and peak power values for all channels of an audio player.
UpdateMeters()
// Returns the average power, in decibels full-scale (dBFS), for an audio channel.
AveragePowerForChannel(channelNumber uint) float32
// Returns the peak power, in decibels full-scale (dBFS), for an audio channel.
PeakPowerForChannel(channelNumber uint) float32
// The delegate object for the audio player.
Delegate() AVAudioPlayerDelegate
SetDelegate(value AVAudioPlayerDelegate)
// The URL of the audio file.
Url() foundation.INSURL
// The audio data associated with the player.
Data() foundation.INSData
// The format of the player’s audio data.
Format() IAVAudioFormat
// A dictionary that provides information about the player’s audio data.
Settings() foundation.INSDictionary
// The unique identifier of the audio player’s current output device.
CurrentDevice() string
SetCurrentDevice(value string)
// The time value, in seconds, of the audio output device’s clock.
DeviceCurrentTime() float64
}
An interface definition for the AVAudioPlayer class.
Creating an audio player ¶
- [IAVAudioPlayer.InitWithContentsOfURLError]: Creates a player to play audio from a file.
- [IAVAudioPlayer.InitWithContentsOfURLFileTypeHintError]: Creates a player to play audio from a file of a particular type.
- [IAVAudioPlayer.InitWithDataError]: Creates a player to play in-memory audio data.
- [IAVAudioPlayer.InitWithDataFileTypeHintError]: Creates a player to play in-memory audio data of a particular type.
Controlling playback ¶
- [IAVAudioPlayer.PrepareToPlay]: Prepares the player for audio playback.
- [IAVAudioPlayer.Play]: Plays audio asynchronously.
- [IAVAudioPlayer.PlayAtTime]: Plays audio asynchronously, starting at a specified point in the audio output device’s timeline.
- [IAVAudioPlayer.Pause]: Pauses audio playback.
- [IAVAudioPlayer.Stop]: Stops playback and undoes the setup the system requires for playback.
- [IAVAudioPlayer.Playing]: A Boolean value that indicates whether the player is currently playing audio.
Configuring playback settings ¶
- [IAVAudioPlayer.Volume]: The audio player’s volume relative to other audio output.
- [IAVAudioPlayer.SetVolume]
- [IAVAudioPlayer.SetVolumeFadeDuration]: Changes the audio player’s volume over a duration of time.
- [IAVAudioPlayer.Pan]: The audio player’s stereo pan position.
- [IAVAudioPlayer.SetPan]
- [IAVAudioPlayer.EnableRate]: A Boolean value that indicates whether you can adjust the playback rate of the audio player.
- [IAVAudioPlayer.SetEnableRate]
- [IAVAudioPlayer.Rate]: The audio player’s playback rate.
- [IAVAudioPlayer.SetRate]
- [IAVAudioPlayer.NumberOfLoops]: The number of times the audio repeats playback.
- [IAVAudioPlayer.SetNumberOfLoops]
Accessing player timing ¶
- [IAVAudioPlayer.CurrentTime]: The current playback time, in seconds, within the audio timeline.
- [IAVAudioPlayer.SetCurrentTime]
- [IAVAudioPlayer.Duration]: The total duration, in seconds, of the player’s audio.
Configuring the Spatial Audio experience ¶
- [IAVAudioPlayer.IntendedSpatialExperience]: The intended spatial experience for this player.
- [IAVAudioPlayer.SetIntendedSpatialExperience]
Managing audio channels ¶
- [IAVAudioPlayer.NumberOfChannels]: The number of audio channels in the player’s audio.
Managing audio-level metering ¶
- [IAVAudioPlayer.MeteringEnabled]: A Boolean value that indicates whether the player is able to generate audio-level metering data.
- [IAVAudioPlayer.SetMeteringEnabled]
- [IAVAudioPlayer.UpdateMeters]: Refreshes the average and peak power values for all channels of an audio player.
- [IAVAudioPlayer.AveragePowerForChannel]: Returns the average power, in decibels full-scale (dBFS), for an audio channel.
- [IAVAudioPlayer.PeakPowerForChannel]: Returns the peak power, in decibels full-scale (dBFS), for an audio channel.
Responding to player events ¶
- [IAVAudioPlayer.Delegate]: The delegate object for the audio player.
- [IAVAudioPlayer.SetDelegate]
Inspecting the audio data ¶
- [IAVAudioPlayer.Url]: The URL of the audio file.
- [IAVAudioPlayer.Data]: The audio data associated with the player.
- [IAVAudioPlayer.Format]: The format of the player’s audio data.
- [IAVAudioPlayer.Settings]: A dictionary that provides information about the player’s audio data.
Accessing device information ¶
- [IAVAudioPlayer.CurrentDevice]: The unique identifier of the audio player’s current output device.
- [IAVAudioPlayer.SetCurrentDevice]
- [IAVAudioPlayer.DeviceCurrentTime]: The time value, in seconds, of the audio output device’s clock.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayer
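AveragePowerForChannel and PeakPowerForChannel report power in dBFS, where 0 is full scale and more negative values are quieter; a UI level meter usually wants a linear 0…1 value instead. A pure-Go sketch of that conversion (meterLevel is an illustrative helper; the polling calls are shown only in comments because they need a live player):

```go
package main

import (
	"fmt"
	"math"
)

// meterLevel converts a dBFS power reading into a 0…1 linear value:
// amplitude = 10^(dBFS/20), clamped at full scale.
func meterLevel(dBFS float64) float64 {
	if dBFS >= 0 {
		return 1
	}
	return math.Pow(10, dBFS/20)
}

func main() {
	fmt.Printf("%.3f\n", meterLevel(0))   // full scale
	fmt.Printf("%.3f\n", meterLevel(-20)) // one tenth of full scale
	// Typical polling loop sketch (not executed here):
	//   player.SetMeteringEnabled(true)
	//   player.UpdateMeters()
	//   level := meterLevel(float64(player.AveragePowerForChannel(0)))
}
```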
type IAVAudioPlayerNode ¶
type IAVAudioPlayerNode interface {
IAVAudioNode
AVAudio3DMixing
AVAudioMixing
AVAudioStereoMixing
// Schedules the playing of an entire audio file.
ScheduleFileAtTimeCompletionHandler(file IAVAudioFile, when IAVAudioTime, completionHandler ErrorHandler)
// Schedules the playing of an entire audio file with a callback option you specify.
ScheduleFileAtTimeCompletionCallbackTypeCompletionHandler(file IAVAudioFile, when IAVAudioTime, callbackType AVAudioPlayerNodeCompletionCallbackType, completionHandler ErrorHandler)
// Schedules the playing of an audio file segment.
ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler(file IAVAudioFile, startFrame AVAudioFramePosition, numberFrames AVAudioFrameCount, when IAVAudioTime, completionHandler ErrorHandler)
// Schedules the playing of an audio file segment with a callback option you specify.
ScheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler(file IAVAudioFile, startFrame AVAudioFramePosition, numberFrames AVAudioFrameCount, when IAVAudioTime, callbackType AVAudioPlayerNodeCompletionCallbackType, completionHandler ErrorHandler)
// Schedules playing samples from an audio buffer at the time and with the playback options you specify.
ScheduleBufferAtTimeOptionsCompletionHandler(buffer IAVAudioPCMBuffer, when IAVAudioTime, options AVAudioPlayerNodeBufferOptions, completionHandler ErrorHandler)
// Schedules playing samples from an audio buffer.
ScheduleBufferCompletionHandler(buffer IAVAudioPCMBuffer, completionHandler ErrorHandler)
// Schedules playing samples from an audio buffer with the time, playback options, and callback option you specify.
ScheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler(buffer IAVAudioPCMBuffer, when IAVAudioTime, options AVAudioPlayerNodeBufferOptions, callbackType AVAudioPlayerNodeCompletionCallbackType, completionHandler ErrorHandler)
// Schedules playing samples from an audio buffer with the callback option you specify.
ScheduleBufferCompletionCallbackTypeCompletionHandler(buffer IAVAudioPCMBuffer, callbackType AVAudioPlayerNodeCompletionCallbackType, completionHandler ErrorHandler)
// Converts from player time to node time.
NodeTimeForPlayerTime(playerTime IAVAudioTime) IAVAudioTime
// Converts from node time to player time.
PlayerTimeForNodeTime(nodeTime IAVAudioTime) IAVAudioTime
// Prepares the file regions or buffers you schedule for playback.
PrepareWithFrameCount(frameCount AVAudioFrameCount)
// Starts or resumes playback immediately.
Play()
// Starts or resumes playback at a time you specify.
PlayAtTime(when IAVAudioTime)
// A Boolean value that indicates whether the player is playing.
Playing() bool
// Pauses the node’s playback.
Pause()
// Clears all of the node’s events you schedule and stops playback.
Stop()
}
An interface definition for the AVAudioPlayerNode class.
Scheduling Playback ¶
- [IAVAudioPlayerNode.ScheduleFileAtTimeCompletionHandler]: Schedules the playing of an entire audio file.
- [IAVAudioPlayerNode.ScheduleFileAtTimeCompletionCallbackTypeCompletionHandler]: Schedules the playing of an entire audio file with a callback option you specify.
- [IAVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionHandler]: Schedules the playing of an audio file segment.
- [IAVAudioPlayerNode.ScheduleSegmentStartingFrameFrameCountAtTimeCompletionCallbackTypeCompletionHandler]: Schedules the playing of an audio file segment with a callback option you specify.
- [IAVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionHandler]: Schedules playing samples from an audio buffer at the time and with the playback options you specify.
- [IAVAudioPlayerNode.ScheduleBufferCompletionHandler]: Schedules playing samples from an audio buffer.
- [IAVAudioPlayerNode.ScheduleBufferAtTimeOptionsCompletionCallbackTypeCompletionHandler]: Schedules playing samples from an audio buffer with the time, playback options, and callback option you specify.
- [IAVAudioPlayerNode.ScheduleBufferCompletionCallbackTypeCompletionHandler]: Schedules playing samples from an audio buffer with the callback option you specify.
Converting Node and Player Times ¶
- [IAVAudioPlayerNode.NodeTimeForPlayerTime]: Converts from player time to node time.
- [IAVAudioPlayerNode.PlayerTimeForNodeTime]: Converts from node time to player time.
Controlling Playback ¶
- [IAVAudioPlayerNode.PrepareWithFrameCount]: Prepares the file regions or buffers you schedule for playback.
- [IAVAudioPlayerNode.Play]: Starts or resumes playback immediately.
- [IAVAudioPlayerNode.PlayAtTime]: Starts or resumes playback at a time you specify.
- [IAVAudioPlayerNode.Playing]: A Boolean value that indicates whether the player is playing.
- [IAVAudioPlayerNode.Pause]: Pauses the node’s playback.
- [IAVAudioPlayerNode.Stop]: Clears all of the node’s events you schedule and stops playback.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioPlayerNode
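PlayAtTime takes an AVAudioTime, which for sample-time-based scheduling is built from the node's last render position plus a delay converted to frames at the output sample rate. A pure-Go sketch of that conversion (startSampleAfterDelay is an illustrative helper; the binding calls in the comment need a running engine and are not executed):

```go
package main

import "fmt"

// startSampleAfterDelay computes the sample time at which playback
// should begin: the node's most recent render sample time plus a
// delay expressed in frames at the output sample rate.
func startSampleAfterDelay(lastSampleTime int64, delaySeconds, sampleRate float64) int64 {
	return lastSampleTime + int64(delaySeconds*sampleRate)
}

func main() {
	// Start two seconds after the node's last render position at 44.1 kHz.
	fmt.Println(startSampleAfterDelay(100000, 2.0, 44100))
	// Sketch with the bindings above (not executed here):
	//   now := playerNode.LastRenderTime()
	//   start := startSampleAfterDelay(/* now's sample time */, 2.0, /* now's sample rate */)
	//   playerNode.PlayAtTime(/* AVAudioTime built from start */)
}
```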
type IAVAudioRecorder ¶
type IAVAudioRecorder interface {
objectivec.IObject
// Creates an audio recorder with settings.
InitWithURLSettingsError(url foundation.INSURL, settings foundation.INSDictionary) (AVAudioRecorder, error)
// Creates an audio recorder with an audio format.
InitWithURLFormatError(url foundation.INSURL, format IAVAudioFormat) (AVAudioRecorder, error)
// Creates an audio file and prepares the system for recording.
PrepareToRecord() bool
// Records audio starting at a specific time.
RecordAtTime(time float64) bool
// Records audio for the indicated duration of time.
RecordForDuration(duration float64) bool
// Records audio starting at a specific time for the indicated duration.
RecordAtTimeForDuration(time float64, duration float64) bool
// Pauses an audio recording.
Pause()
// Stops recording and closes the audio file.
Stop()
// A Boolean value that indicates whether the audio recorder is recording.
Recording() bool
// Deletes a recorded audio file.
DeleteRecording() bool
// The time, in seconds, since the beginning of the recording.
CurrentTime() float64
// The time, in seconds, of the host audio device.
DeviceCurrentTime() float64
// A Boolean value that indicates whether you’ve enabled the recorder to generate audio-level metering data.
MeteringEnabled() bool
SetMeteringEnabled(value bool)
// Refreshes the average and peak power values for all channels of an audio recorder.
UpdateMeters()
// Returns the average power, in decibels full-scale (dBFS), for an audio channel.
AveragePowerForChannel(channelNumber uint) float32
// Returns the peak power, in decibels full-scale (dBFS), for an audio channel.
PeakPowerForChannel(channelNumber uint) float32
// The delegate object for the audio recorder.
Delegate() AVAudioRecorderDelegate
SetDelegate(value AVAudioRecorderDelegate)
// The URL to which the recorder writes its data.
Url() foundation.INSURL
// The format of the recorded audio.
Format() IAVAudioFormat
// The settings that describe the format of the recorded audio.
Settings() foundation.INSDictionary
// The category for recording (input) and playback (output) of audio, such as for a Voice over Internet Protocol (VoIP) app.
PlayAndRecord() objc.ID
// The category for recording audio while also silencing playback audio.
Record() objc.ID
}
An interface definition for the AVAudioRecorder class.
Creating an audio recorder ¶
- [IAVAudioRecorder.InitWithURLSettingsError]: Creates an audio recorder with settings.
- [IAVAudioRecorder.InitWithURLFormatError]: Creates an audio recorder with an audio format.
Controlling recording ¶
- [IAVAudioRecorder.PrepareToRecord]: Creates an audio file and prepares the system for recording.
- [IAVAudioRecorder.RecordAtTime]: Records audio starting at a specific time.
- [IAVAudioRecorder.RecordForDuration]: Records audio for the indicated duration of time.
- [IAVAudioRecorder.RecordAtTimeForDuration]: Records audio starting at a specific time for the indicated duration.
- [IAVAudioRecorder.Pause]: Pauses an audio recording.
- [IAVAudioRecorder.Stop]: Stops recording and closes the audio file.
- [IAVAudioRecorder.Recording]: A Boolean value that indicates whether the audio recorder is recording.
- [IAVAudioRecorder.DeleteRecording]: Deletes a recorded audio file.
Accessing recorder timing ¶
- [IAVAudioRecorder.CurrentTime]: The time, in seconds, since the beginning of the recording.
- [IAVAudioRecorder.DeviceCurrentTime]: The time, in seconds, of the host audio device.
Managing audio-level metering ¶
- [IAVAudioRecorder.MeteringEnabled]: A Boolean value that indicates whether you’ve enabled the recorder to generate audio-level metering data.
- [IAVAudioRecorder.SetMeteringEnabled]
- [IAVAudioRecorder.UpdateMeters]: Refreshes the average and peak power values for all channels of an audio recorder.
- [IAVAudioRecorder.AveragePowerForChannel]: Returns the average power, in decibels full-scale (dBFS), for an audio channel.
- [IAVAudioRecorder.PeakPowerForChannel]: Returns the peak power, in decibels full-scale (dBFS), for an audio channel.
Responding to recorder events ¶
- [IAVAudioRecorder.Delegate]: The delegate object for the audio recorder.
- [IAVAudioRecorder.SetDelegate]
Inspecting the audio data ¶
- [IAVAudioRecorder.Url]: The URL to which the recorder writes its data.
- [IAVAudioRecorder.Format]: The format of the recorded audio.
- [IAVAudioRecorder.Settings]: The settings that describe the format of the recorded audio.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRecorder
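When choosing the settings dictionary for InitWithURLSettingsError, it helps to know roughly how large a linear-PCM recording will be: seconds × sample rate × channels × bytes per sample. A pure-Go sketch of that estimate (estimatedPCMBytes is an illustrative helper, not part of the bindings; the recorder calls appear only in comments):

```go
package main

import "fmt"

// estimatedPCMBytes estimates the payload size of an uncompressed
// linear-PCM recording, ignoring container overhead.
func estimatedPCMBytes(seconds, sampleRate float64, channels, bitsPerSample int) int64 {
	return int64(seconds * sampleRate * float64(channels) * float64(bitsPerSample/8))
}

func main() {
	// One minute of 16-bit mono at 44.1 kHz is about 5.3 MB.
	fmt.Println(estimatedPCMBytes(60, 44100, 1, 16))
	// Sketch with the bindings above (not executed here):
	//   rec, err := ...InitWithURLSettingsError(url, settings)
	//   if err == nil { rec.RecordForDuration(60) }
}
```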
type IAVAudioRoutingArbiter ¶
type IAVAudioRoutingArbiter interface {
objectivec.IObject
// Begins routing arbitration to take ownership of a nearby Bluetooth audio route.
BeginArbitrationWithCategoryCompletionHandler(category AVAudioRoutingArbitrationCategory, handler BoolErrorHandler)
// Stops an app’s participation in audio routing arbitration.
LeaveArbitration()
}
An interface definition for the AVAudioRoutingArbiter class.
Participating in AirPods Automatic Switching ¶
- [IAVAudioRoutingArbiter.BeginArbitrationWithCategoryCompletionHandler]: Begins routing arbitration to take ownership of a nearby Bluetooth audio route.
- [IAVAudioRoutingArbiter.LeaveArbitration]: Stops an app’s participation in audio routing arbitration.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioRoutingArbiter
type IAVAudioSequencer ¶
type IAVAudioSequencer interface {
objectivec.IObject
// Creates an audio sequencer that the framework attaches to an audio engine instance.
InitWithAudioEngine(engine IAVAudioEngine) AVAudioSequencer
// Creates and writes a MIDI file from the events in the sequence.
WriteToURLSMPTEResolutionReplaceExistingError(fileURL foundation.INSURL, resolution int, replace bool) (bool, error)
// Creates a new music track and appends it to the sequencer’s list.
CreateAndAppendTrack() IAVMusicTrack
// Reverses the order of all events in all music tracks, including the tempo track.
ReverseEvents()
// Removes the music track from the sequencer.
RemoveTrack(track IAVMusicTrack) bool
// Parses the data and adds its events to the sequence.
LoadFromDataOptionsError(data foundation.INSData, options AVMusicSequenceLoadOptions) (bool, error)
// Loads the file the URL references and adds the events to the sequence.
LoadFromURLOptionsError(fileURL foundation.INSURL, options AVMusicSequenceLoadOptions) (bool, error)
// Gets ready to play the sequence by prerolling all events.
PrepareToPlay()
// Starts the sequencer’s player.
StartAndReturnError() (bool, error)
// Stops the sequencer’s player.
Stop()
// Gets the host time the sequence plays at the specified position.
HostTimeForBeatsError(inBeats AVMusicTimeStamp) (uint64, error)
// Gets the time for the specified beat position (timestamp) in the track, in seconds.
SecondsForBeats(beats AVMusicTimeStamp) float64
// Gets the beat the system plays at the specified host time.
BeatsForHostTimeError(inHostTime uint64) (AVMusicTimeStamp, error)
// Gets the beat position (timestamp) for the specified time in the track.
BeatsForSeconds(seconds float64) AVMusicTimeStamp
// A timestamp you use to access all events in a music track through a beat range.
AVMusicTimeStampEndOfTrack() float64
SetAVMusicTimeStampEndOfTrack(value float64)
// Adds a callback that the sequencer calls each time it encounters a user event during playback.
SetUserCallback(userCallback AVAudioSequencerUserCallback)
// A Boolean value that indicates whether the sequencer’s player is in a playing state.
Playing() bool
// The playback rate of the sequencer’s player.
Rate() float32
SetRate(value float32)
// An array that contains all the tracks in the sequence.
Tracks() []AVMusicTrack
// The current playback position, in beats.
CurrentPositionInBeats() float64
SetCurrentPositionInBeats(value float64)
// The current playback position, in seconds.
CurrentPositionInSeconds() float64
SetCurrentPositionInSeconds(value float64)
// The track that contains tempo information about the sequence.
TempoTrack() IAVMusicTrack
// A dictionary that contains metadata from a sequence.
UserInfo() foundation.INSDictionary
// Gets a data object that contains the events from the sequence.
DataWithSMPTEResolutionError(SMPTEResolution int) (foundation.INSData, error)
}
An interface definition for the AVAudioSequencer class.
Creating an Audio Sequencer ¶
- [IAVAudioSequencer.InitWithAudioEngine]: Creates an audio sequencer that the framework attaches to an audio engine instance.
Writing to a MIDI File ¶
- [IAVAudioSequencer.WriteToURLSMPTEResolutionReplaceExistingError]: Creates and writes a MIDI file from the events in the sequence.
Handling Music Tracks ¶
- [IAVAudioSequencer.CreateAndAppendTrack]: Creates a new music track and appends it to the sequencer’s list.
- [IAVAudioSequencer.ReverseEvents]: Reverses the order of all events in all music tracks, including the tempo track.
- [IAVAudioSequencer.RemoveTrack]: Removes the music track from the sequencer.
Managing Sequence Load Options ¶
- [IAVAudioSequencer.LoadFromDataOptionsError]: Parses the data and adds its events to the sequence.
- [IAVAudioSequencer.LoadFromURLOptionsError]: Loads the file the URL references and adds the events to the sequence.
Operating an Audio Sequencer ¶
- [IAVAudioSequencer.PrepareToPlay]: Gets ready to play the sequence by prerolling all events.
- [IAVAudioSequencer.StartAndReturnError]: Starts the sequencer’s player.
- [IAVAudioSequencer.Stop]: Stops the sequencer’s player.
Managing Time Stamps ¶
- [IAVAudioSequencer.HostTimeForBeatsError]: Gets the host time the sequence plays at the specified position.
- [IAVAudioSequencer.SecondsForBeats]: Gets the time for the specified beat position (timestamp) in the track, in seconds.
Handling Beat Range ¶
- [IAVAudioSequencer.BeatsForHostTimeError]: Gets the beat the system plays at the specified host time.
- [IAVAudioSequencer.BeatsForSeconds]: Gets the beat position (timestamp) for the specified time in the track.
- [IAVAudioSequencer.AVMusicTimeStampEndOfTrack]: A timestamp you use to access all events in a music track through a beat range.
- [IAVAudioSequencer.SetAVMusicTimeStampEndOfTrack]
Setting the User Callback ¶
- [IAVAudioSequencer.SetUserCallback]: Adds a callback that the sequencer calls each time it encounters a user event during playback.
Getting Sequence Properties ¶
- [IAVAudioSequencer.Playing]: A Boolean value that indicates whether the sequencer’s player is in a playing state.
- [IAVAudioSequencer.Rate]: The playback rate of the sequencer’s player.
- [IAVAudioSequencer.SetRate]
- [IAVAudioSequencer.Tracks]: An array that contains all the tracks in the sequence.
- [IAVAudioSequencer.CurrentPositionInBeats]: The current playback position, in beats.
- [IAVAudioSequencer.SetCurrentPositionInBeats]
- [IAVAudioSequencer.CurrentPositionInSeconds]: The current playback position, in seconds.
- [IAVAudioSequencer.SetCurrentPositionInSeconds]
- [IAVAudioSequencer.TempoTrack]: The track that contains tempo information about the sequence.
- [IAVAudioSequencer.UserInfo]: A dictionary that contains metadata from a sequence.
- [IAVAudioSequencer.DataWithSMPTEResolutionError]: Gets a data object that contains the events from the sequence.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSequencer
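The load-preroll-start flow above can be sketched as follows. This is a sketch, not a verified program: the `avfaudio` and `foundation` import paths and the `AVMusicSequenceLoadSMF_ChannelsToTracks` option name are assumptions about the generated binding, and the code only runs on Apple platforms.

```go
// Sketch: load a standard MIDI file into an existing sequencer and play it.
// Assumes seq was created with InitWithAudioEngine and the attached engine
// is already running.
func playMIDIFile(seq avfaudio.IAVAudioSequencer, fileURL foundation.INSURL) error {
	// Place each MIDI channel on its own track (assumed constant name).
	if ok, err := seq.LoadFromURLOptionsError(fileURL, avfaudio.AVMusicSequenceLoadSMF_ChannelsToTracks); !ok {
		return err
	}
	seq.PrepareToPlay() // preroll all events before starting
	if ok, err := seq.StartAndReturnError(); !ok {
		return err
	}
	return nil
}
```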
type IAVAudioSessionCapability ¶
type IAVAudioSessionCapability interface {
objectivec.IObject
// A Boolean value that indicates whether the capability is enabled.
Enabled() bool
// A Boolean value that indicates whether the capability is supported.
Supported() bool
// An optional port extension that describes capabilities relevant to Bluetooth microphone ports.
BluetoothMicrophoneExtension() objc.ID
SetBluetoothMicrophoneExtension(value objc.ID)
}
An interface definition for the AVAudioSessionCapability class.
Inspecting a capability ¶
- [IAVAudioSessionCapability.Enabled]: A Boolean value that indicates whether the capability is enabled.
- [IAVAudioSessionCapability.Supported]: A Boolean value that indicates whether the capability is supported.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSessionCapability
type IAVAudioSinkNode ¶
type IAVAudioSinkNode interface {
IAVAudioNode
// Creates an audio sink node with a block that receives audio data.
InitWithReceiverBlock(block AVAudioSinkNodeReceiverBlock) AVAudioSinkNode
}
An interface definition for the AVAudioSinkNode class.
Creating an Audio Sink Node ¶
- [IAVAudioSinkNode.InitWithReceiverBlock]: Creates an audio sink node with a block that receives audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSinkNode
type IAVAudioSourceNode ¶
type IAVAudioSourceNode interface {
IAVAudioNode
AVAudio3DMixing
AVAudioMixing
AVAudioStereoMixing
// Creates an audio source node with a block that supplies audio data.
InitWithRenderBlock(block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
// Creates an audio source node with the audio format and a block that supplies audio data.
InitWithFormatRenderBlock(format IAVAudioFormat, block AVAudioSourceNodeRenderBlock) AVAudioSourceNode
}
An interface definition for the AVAudioSourceNode class.
Creating an Audio Source Node ¶
- [IAVAudioSourceNode.InitWithRenderBlock]: Creates an audio source node with a block that supplies audio data.
- [IAVAudioSourceNode.InitWithFormatRenderBlock]: Creates an audio source node with the audio format and a block that supplies audio data.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioSourceNode
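A source node pulls its audio from the render block you supply. In Objective-C, AVAudioSourceNodeRenderBlock receives an is-silence flag, a timestamp, a frame count, and an output buffer list, and returns an OSStatus; how this Go binding surfaces those types is an assumption in the sketch below.

```go
// Sketch: a render block that reports silence for every render cycle.
// The Go-side signature and the AudioTimeStamp/AudioBufferList type names
// are assumptions about the binding.
var render avfaudio.AVAudioSourceNodeRenderBlock = func(
	isSilence *bool, timestamp *avfaudio.AudioTimeStamp,
	frameCount uint32, outputData *avfaudio.AudioBufferList,
) int32 {
	*isSilence = true // tell the engine this cycle produced no audio
	return 0          // noErr
}
// node := avfaudio.NewAVAudioSourceNode().InitWithRenderBlock(render)
// (NewAVAudioSourceNode is a hypothetical allocation helper.)
```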
type IAVAudioTime ¶
type IAVAudioTime interface {
objectivec.IObject
// Creates an audio time object with the specified timestamp and sample rate.
InitWithAudioTimeStampSampleRate(ts objectivec.IObject, sampleRate float64) AVAudioTime
// Creates an audio time object with the specified host time.
InitWithHostTime(hostTime uint64) AVAudioTime
// Creates an audio time object with the specified host time, sample time, and sample rate.
InitWithHostTimeSampleTimeAtRate(hostTime uint64, sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
// Creates an audio time object with the specified sample time and sample rate.
InitWithSampleTimeAtRate(sampleTime AVAudioFramePosition, sampleRate float64) AVAudioTime
// Creates an audio time object by converting between host time and sample time.
ExtrapolateTimeFromAnchor(anchorTime IAVAudioTime) IAVAudioTime
// The host time.
HostTime() uint64
// A Boolean value that indicates whether the host time value is valid.
HostTimeValid() bool
// The sampling rate that the sample time property expresses.
SampleRate() float64
// The time as a number of audio samples that the current audio device tracks.
SampleTime() AVAudioFramePosition
// A Boolean value that indicates whether the sample time and sample rate properties are in a valid state.
SampleTimeValid() bool
// The time as an audio timestamp.
AudioTimeStamp() objectivec.IObject
}
An interface definition for the AVAudioTime class.
Creating an Audio Time Instance ¶
- [IAVAudioTime.InitWithAudioTimeStampSampleRate]: Creates an audio time object with the specified timestamp and sample rate.
- [IAVAudioTime.InitWithHostTime]: Creates an audio time object with the specified host time.
- [IAVAudioTime.InitWithHostTimeSampleTimeAtRate]: Creates an audio time object with the specified host time, sample time, and sample rate.
- [IAVAudioTime.InitWithSampleTimeAtRate]: Creates an audio time object with the specified sample time and sample rate.
- [IAVAudioTime.ExtrapolateTimeFromAnchor]: Creates an audio time object by converting between host time and sample time.
Manipulating Host Time ¶
- [IAVAudioTime.HostTime]: The host time.
- [IAVAudioTime.HostTimeValid]: A Boolean value that indicates whether the host time value is valid.
Getting Sample Rate Information ¶
- [IAVAudioTime.SampleRate]: The sampling rate that the sample time property expresses.
- [IAVAudioTime.SampleTime]: The time as a number of audio samples that the current audio device tracks.
- [IAVAudioTime.SampleTimeValid]: A Boolean value that indicates whether the sample time and sample rate properties are in a valid state.
Getting the Core Audio Time Stamp ¶
- [IAVAudioTime.AudioTimeStamp]: The time as an audio timestamp.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioTime
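A common use of these initializers is building a future timestamp for scheduling. The sketch below assumes a hypothetical `NewAVAudioTime` allocation helper; the binding's actual constructor name may differ.

```go
// Sketch: express "one second after now" as an AVAudioTime.
// Assumes now.SampleTimeValid() is true so SampleTime and SampleRate
// carry meaningful values.
func oneSecondAfter(now avfaudio.IAVAudioTime) avfaudio.IAVAudioTime {
	rate := now.SampleRate()
	// One second equals rate samples at the current sample rate.
	return avfaudio.NewAVAudioTime().InitWithSampleTimeAtRate(
		now.SampleTime()+avfaudio.AVAudioFramePosition(rate), rate)
}
```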
type IAVAudioUnit ¶
type IAVAudioUnit interface {
IAVAudioNode
// The underlying Core Audio audio unit.
AudioUnit() IAVAudioUnit
// Loads an audio unit using a specified preset.
LoadAudioUnitPresetAtURLError(url foundation.INSURL) (bool, error)
// The audio component description that represents the underlying Core Audio audio unit.
AudioComponentDescription() objectivec.IObject
// The name of the manufacturer of the audio unit.
ManufacturerName() string
// The name of the audio unit.
Name() string
// The version number of the audio unit.
Version() uint
}
An interface definition for the AVAudioUnit class.
Getting the Core Audio audio unit ¶
- [IAVAudioUnit.AudioUnit]: The underlying Core Audio audio unit.
Loading an audio preset file ¶
- [IAVAudioUnit.LoadAudioUnitPresetAtURLError]: Loads an audio unit using a specified preset.
Getting audio unit values ¶
- [IAVAudioUnit.AudioComponentDescription]: The audio component description that represents the underlying Core Audio audio unit.
- [IAVAudioUnit.ManufacturerName]: The name of the manufacturer of the audio unit.
- [IAVAudioUnit.Name]: The name of the audio unit.
- [IAVAudioUnit.Version]: The version number of the audio unit.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnit
type IAVAudioUnitComponent ¶
type IAVAudioUnitComponent interface {
objectivec.IObject
// The underlying audio component.
AudioComponent() objectivec.IObject
// The audio component description.
AudioComponentDescription() objectivec.IObject
// An array of architectures that the audio unit supports.
AvailableArchitectures() []foundation.NSNumber
// The audio unit component’s configuration dictionary.
ConfigurationDictionary() foundation.INSDictionary
// A Boolean value that indicates whether the audio unit component has a custom view.
HasCustomView() bool
// A Boolean value that indicates whether the audio unit component has MIDI input.
HasMIDIInput() bool
// A Boolean value that indicates whether the audio unit component has MIDI output.
HasMIDIOutput() bool
// The name of the manufacturer of the audio unit component.
ManufacturerName() string
// The name of the audio unit component.
Name() string
// A Boolean value that indicates whether the audio unit component passes the validation tests.
PassesAUVal() bool
// A Boolean value that indicates whether the audio unit component is safe for sandboxing.
SandboxSafe() bool
// Gets a Boolean value that indicates whether the audio unit component supports the specified number of input and output channels.
SupportsNumberInputChannelsOutputChannels(numInputChannels int, numOutputChannels int) bool
// The audio unit component type.
TypeName() string
// The audio unit component version number.
Version() uint
// A string that represents the audio unit component version number.
VersionString() string
// The URL of an icon that represents the audio unit component.
IconURL() foundation.INSURL
// An icon that represents the component.
Icon() objc.ID
// The localized type name of the component.
LocalizedTypeName() string
// An array of tag names for the audio unit component.
AllTagNames() []string
// An array of tags the user creates.
UserTagNames() []string
SetUserTagNames(value []string)
// The audio unit manufacturer is Apple.
AVAudioUnitManufacturerNameApple() string
// An audio unit type that represents an output.
AVAudioUnitTypeOutput() string
// An audio unit type that represents a music device.
AVAudioUnitTypeMusicDevice() string
// An audio unit type that represents a music effect.
AVAudioUnitTypeMusicEffect() string
// An audio unit type that represents a format converter.
AVAudioUnitTypeFormatConverter() string
// An audio unit type that represents an effect.
AVAudioUnitTypeEffect() string
// An audio unit type that represents a mixer.
AVAudioUnitTypeMixer() string
// An audio unit type that represents a panner.
AVAudioUnitTypePanner() string
// An audio unit type that represents a generator.
AVAudioUnitTypeGenerator() string
// An audio unit type that represents an offline effect.
AVAudioUnitTypeOfflineEffect() string
// An audio unit type that represents a MIDI processor.
AVAudioUnitTypeMIDIProcessor() string
}
An interface definition for the AVAudioUnitComponent class.
Getting the audio unit component’s audio unit ¶
- [IAVAudioUnitComponent.AudioComponent]: The underlying audio component.
Getting audio unit component information ¶
- [IAVAudioUnitComponent.AudioComponentDescription]: The audio component description.
- [IAVAudioUnitComponent.AvailableArchitectures]: An array of architectures that the audio unit supports.
- [IAVAudioUnitComponent.ConfigurationDictionary]: The audio unit component’s configuration dictionary.
- [IAVAudioUnitComponent.HasCustomView]: A Boolean value that indicates whether the audio unit component has a custom view.
- [IAVAudioUnitComponent.HasMIDIInput]: A Boolean value that indicates whether the audio unit component has MIDI input.
- [IAVAudioUnitComponent.HasMIDIOutput]: A Boolean value that indicates whether the audio unit component has MIDI output.
- [IAVAudioUnitComponent.ManufacturerName]: The name of the manufacturer of the audio unit component.
- [IAVAudioUnitComponent.Name]: The name of the audio unit component.
- [IAVAudioUnitComponent.PassesAUVal]: A Boolean value that indicates whether the audio unit component passes the validation tests.
- [IAVAudioUnitComponent.SandboxSafe]: A Boolean value that indicates whether the audio unit component is safe for sandboxing.
- [IAVAudioUnitComponent.SupportsNumberInputChannelsOutputChannels]: Gets a Boolean value that indicates whether the audio unit component supports the specified number of input and output channels.
- [IAVAudioUnitComponent.TypeName]: The audio unit component type.
- [IAVAudioUnitComponent.Version]: The audio unit component version number.
- [IAVAudioUnitComponent.VersionString]: A string that represents the audio unit component version number.
Getting audio unit component tags ¶
- [IAVAudioUnitComponent.IconURL]: The URL of an icon that represents the audio unit component.
- [IAVAudioUnitComponent.Icon]: An icon that represents the component.
- [IAVAudioUnitComponent.LocalizedTypeName]: The localized type name of the component.
- [IAVAudioUnitComponent.AllTagNames]: An array of tag names for the audio unit component.
- [IAVAudioUnitComponent.UserTagNames]: An array of tags the user creates.
- [IAVAudioUnitComponent.SetUserTagNames]
Audio unit manufacturer names ¶
- [IAVAudioUnitComponent.AVAudioUnitManufacturerNameApple]: The manufacturer name for audio units that Apple provides.
Audio unit types ¶
- [IAVAudioUnitComponent.AVAudioUnitTypeOutput]: An audio unit type that represents an output.
- [IAVAudioUnitComponent.AVAudioUnitTypeMusicDevice]: An audio unit type that represents a music device.
- [IAVAudioUnitComponent.AVAudioUnitTypeMusicEffect]: An audio unit type that represents a music effect.
- [IAVAudioUnitComponent.AVAudioUnitTypeFormatConverter]: An audio unit type that represents a format converter.
- [IAVAudioUnitComponent.AVAudioUnitTypeEffect]: An audio unit type that represents an effect.
- [IAVAudioUnitComponent.AVAudioUnitTypeMixer]: An audio unit type that represents a mixer.
- [IAVAudioUnitComponent.AVAudioUnitTypePanner]: An audio unit type that represents a panner.
- [IAVAudioUnitComponent.AVAudioUnitTypeGenerator]: An audio unit type that represents a generator.
- [IAVAudioUnitComponent.AVAudioUnitTypeOfflineEffect]: An audio unit type that represents an offline effect.
- [IAVAudioUnitComponent.AVAudioUnitTypeMIDIProcessor]: An audio unit type that represents a MIDI processor.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponent
type IAVAudioUnitComponentManager ¶
type IAVAudioUnitComponentManager interface {
objectivec.IObject
// Gets an array of audio component objects that match the description.
ComponentsMatchingDescription(desc objectivec.IObject) []AVAudioUnitComponent
// Gets an array of audio component objects that match the search predicate.
ComponentsMatchingPredicate(predicate foundation.INSPredicate) []AVAudioUnitComponent
// Gets an array of audio components that pass the specified test block.
ComponentsPassingTest(testHandler AVAudioUnitComponentHandler) []AVAudioUnitComponent
// An array of the localized standard system tags the audio units define.
StandardLocalizedTagNames() []string
// An array of all tags the audio unit associates with the current user, and the system tags the audio units define.
TagNames() []string
}
An interface definition for the AVAudioUnitComponentManager class.
Getting matching audio components ¶
- [IAVAudioUnitComponentManager.ComponentsMatchingDescription]: Gets an array of audio component objects that match the description.
- [IAVAudioUnitComponentManager.ComponentsMatchingPredicate]: Gets an array of audio component objects that match the search predicate.
- [IAVAudioUnitComponentManager.ComponentsPassingTest]: Gets an array of audio components that pass the specified test block.
Getting audio unit tags ¶
- [IAVAudioUnitComponentManager.StandardLocalizedTagNames]: An array of the localized standard system tags the audio units define.
- [IAVAudioUnitComponentManager.TagNames]: An array of all tags the audio unit associates with the current user, and the system tags the audio units define.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitComponentManager
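The query methods above can be combined to filter the installed components. The sketch assumes the binding maps AVAudioUnitComponentHandler to a Go func taking the component and a stop pointer, mirroring the Objective-C block; the actual generated signature may differ.

```go
// Sketch: collect every effect component that passes auval validation.
func validatedEffects(mgr avfaudio.IAVAudioUnitComponentManager) []string {
	comps := mgr.ComponentsPassingTest(func(comp avfaudio.AVAudioUnitComponent, stop *bool) bool {
		// Keep only effects that pass the validation tests.
		return comp.TypeName() == comp.AVAudioUnitTypeEffect() && comp.PassesAUVal()
	})
	var names []string
	for _, c := range comps {
		names = append(names, c.ManufacturerName()+": "+c.Name())
	}
	return names
}
```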
type IAVAudioUnitDelay ¶
type IAVAudioUnitDelay interface {
IAVAudioUnitEffect
// The time for the input signal to reach the output.
DelayTime() float64
SetDelayTime(value float64)
// The amount of the output signal that feeds back into the delay line.
Feedback() float32
SetFeedback(value float32)
// The cutoff frequency above which high frequency content rolls off, in hertz.
LowPassCutoff() float32
SetLowPassCutoff(value float32)
// The blend of the wet and dry signals.
WetDryMix() float32
SetWetDryMix(value float32)
}
An interface definition for the AVAudioUnitDelay class.
Getting and setting the delay values ¶
- [IAVAudioUnitDelay.DelayTime]: The time for the input signal to reach the output.
- [IAVAudioUnitDelay.SetDelayTime]
- [IAVAudioUnitDelay.Feedback]: The amount of the output signal that feeds back into the delay line.
- [IAVAudioUnitDelay.SetFeedback]
- [IAVAudioUnitDelay.LowPassCutoff]: The cutoff frequency above which high frequency content rolls off, in hertz.
- [IAVAudioUnitDelay.SetLowPassCutoff]
- [IAVAudioUnitDelay.WetDryMix]: The blend of the wet and dry signals.
- [IAVAudioUnitDelay.SetWetDryMix]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDelay
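The four properties above fully describe the delay line. Per Apple's documentation, DelayTime is in seconds (0–2), Feedback and WetDryMix are percentages (Feedback may be negative to invert the fed-back signal), and LowPassCutoff is in hertz; the sketch below assumes those units carry over to the binding.

```go
// Sketch: configure a slapback-style echo on an existing delay unit.
func configureSlapback(delay avfaudio.IAVAudioUnitDelay) {
	delay.SetDelayTime(0.12)     // 120 ms between input and echo
	delay.SetFeedback(15)        // a small amount of regeneration (percent)
	delay.SetLowPassCutoff(8000) // darken the repeats (hertz)
	delay.SetWetDryMix(30)       // 30% wet, 70% dry (percent)
}
```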
type IAVAudioUnitDistortion ¶
type IAVAudioUnitDistortion interface {
IAVAudioUnitEffect
// Configures the audio distortion unit by loading a distortion preset.
LoadFactoryPreset(preset AVAudioUnitDistortionPreset)
// The gain that the audio unit applies to the signal before distortion, in decibels.
PreGain() float32
SetPreGain(value float32)
// The blend of the distorted and dry signals.
WetDryMix() float32
SetWetDryMix(value float32)
}
An interface definition for the AVAudioUnitDistortion class.
Configuring the distortion ¶
- [IAVAudioUnitDistortion.LoadFactoryPreset]: Configures the audio distortion unit by loading a distortion preset.
Getting and setting the distortion values ¶
- [IAVAudioUnitDistortion.PreGain]: The gain that the audio unit applies to the signal before distortion, in decibels.
- [IAVAudioUnitDistortion.SetPreGain]
- [IAVAudioUnitDistortion.WetDryMix]: The blend of the distorted and dry signals.
- [IAVAudioUnitDistortion.SetWetDryMix]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitDistortion
type IAVAudioUnitEQ ¶
type IAVAudioUnitEQ interface {
IAVAudioUnitEffect
// Creates an audio unit equalizer object with the specified number of bands.
InitWithNumberOfBands(numberOfBands uint) AVAudioUnitEQ
// An array of equalizer filter parameters.
Bands() []AVAudioUnitEQFilterParameters
// The overall gain adjustment that the audio unit applies to the signal, in decibels.
GlobalGain() float32
SetGlobalGain(value float32)
}
An interface definition for the AVAudioUnitEQ class.
Creating an equalizer ¶
- [IAVAudioUnitEQ.InitWithNumberOfBands]: Creates an audio unit equalizer object with the specified number of bands.
Getting and setting the equalizer values ¶
- [IAVAudioUnitEQ.Bands]: An array of equalizer filter parameters.
- [IAVAudioUnitEQ.GlobalGain]: The overall gain adjustment that the audio unit applies to the signal, in decibels.
- [IAVAudioUnitEQ.SetGlobalGain]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQ
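Each element of Bands is an AVAudioUnitEQFilterParameters object that you configure in place. The sketch assumes the binding exposes the filter-type constants under their Objective-C names (for example, AVAudioUnitEQFilterTypeLowShelf).

```go
// Sketch: shape a two-band EQ created with InitWithNumberOfBands(2).
func configureEQ(eq avfaudio.IAVAudioUnitEQ) {
	bands := eq.Bands() // length matches the band count at creation

	bands[0].SetFilterType(avfaudio.AVAudioUnitEQFilterTypeLowShelf)
	bands[0].SetFrequency(120) // hertz
	bands[0].SetGain(4)        // decibels of boost below the shelf
	bands[0].SetBypass(false)

	bands[1].SetFilterType(avfaudio.AVAudioUnitEQFilterTypeLowPass)
	bands[1].SetFrequency(12000) // roll off content above 12 kHz
	bands[1].SetBypass(false)
}
```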
type IAVAudioUnitEQFilterParameters ¶
type IAVAudioUnitEQFilterParameters interface {
objectivec.IObject
// The bandwidth of the equalizer filter, in octaves.
Bandwidth() float32
SetBandwidth(value float32)
// The bypass state of the equalizer filter band.
Bypass() bool
SetBypass(value bool)
// The equalizer filter type.
FilterType() AVAudioUnitEQFilterType
SetFilterType(value AVAudioUnitEQFilterType)
// The frequency of the equalizer filter, in hertz.
Frequency() float32
SetFrequency(value float32)
// The gain of the equalizer filter, in decibels.
Gain() float32
SetGain(value float32)
// An array of equalizer filter parameters.
Bands() IAVAudioUnitEQFilterParameters
SetBands(value IAVAudioUnitEQFilterParameters)
// The overall gain adjustment that the audio unit applies to the signal, in decibels.
GlobalGain() float32
SetGlobalGain(value float32)
}
An interface definition for the AVAudioUnitEQFilterParameters class.
Getting and Setting Equalizer Filter Parameters ¶
- [IAVAudioUnitEQFilterParameters.Bandwidth]: The bandwidth of the equalizer filter, in octaves.
- [IAVAudioUnitEQFilterParameters.SetBandwidth]
- [IAVAudioUnitEQFilterParameters.Bypass]: The bypass state of the equalizer filter band.
- [IAVAudioUnitEQFilterParameters.SetBypass]
- [IAVAudioUnitEQFilterParameters.FilterType]: The equalizer filter type.
- [IAVAudioUnitEQFilterParameters.SetFilterType]
- [IAVAudioUnitEQFilterParameters.Frequency]: The frequency of the equalizer filter, in hertz.
- [IAVAudioUnitEQFilterParameters.SetFrequency]
- [IAVAudioUnitEQFilterParameters.Gain]: The gain of the equalizer filter, in decibels.
- [IAVAudioUnitEQFilterParameters.SetGain]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEQFilterParameters
type IAVAudioUnitEffect ¶
type IAVAudioUnitEffect interface {
IAVAudioUnit
// Creates an audio unit effect object with the specified description.
InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitEffect
// The bypass state of the audio unit.
Bypass() bool
SetBypass(value bool)
}
An interface definition for the AVAudioUnitEffect class.
Creating an audio effect ¶
- [IAVAudioUnitEffect.InitWithAudioComponentDescription]: Creates an audio unit effect object with the specified description.
Getting the bypass state ¶
- [IAVAudioUnitEffect.Bypass]: The bypass state of the audio unit.
- [IAVAudioUnitEffect.SetBypass]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitEffect
type IAVAudioUnitGenerator ¶
type IAVAudioUnitGenerator interface {
IAVAudioUnit
AVAudio3DMixing
AVAudioMixing
AVAudioStereoMixing
// Creates a generator audio unit with the specified description.
InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitGenerator
// The bypass state of the audio unit.
Bypass() bool
SetBypass(value bool)
}
An interface definition for the AVAudioUnitGenerator class.
Creating an audio unit generator ¶
- [IAVAudioUnitGenerator.InitWithAudioComponentDescription]: Creates a generator audio unit with the specified description.
Getting and setting the bypass status ¶
- [IAVAudioUnitGenerator.Bypass]: The bypass state of the audio unit.
- [IAVAudioUnitGenerator.SetBypass]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitGenerator
type IAVAudioUnitMIDIInstrument ¶
type IAVAudioUnitMIDIInstrument interface {
IAVAudioUnit
AVAudio3DMixing
AVAudioMixing
AVAudioStereoMixing
// Creates a MIDI instrument audio unit with the component description you specify.
InitWithAudioComponentDescription(description objectivec.IObject) AVAudioUnitMIDIInstrument
// Sends a MIDI controller event to the instrument.
SendControllerWithValueOnChannel(controller uint8, value uint8, channel uint8)
// Sends a MIDI event that contains one data byte to the instrument.
SendMIDIEventData1(midiStatus uint8, data1 uint8)
// Sends a MIDI event that contains two data bytes to the instrument.
SendMIDIEventData1Data2(midiStatus uint8, data1 uint8, data2 uint8)
// Sends a MIDI System Exclusive event to the instrument.
SendMIDISysExEvent(midiData foundation.INSData)
// Sends a MIDI Pitch Bend event to the instrument.
SendPitchBendOnChannel(pitchbend uint16, channel uint8)
// Sends a MIDI channel pressure event to the instrument.
SendPressureOnChannel(pressure uint8, channel uint8)
// Sends a MIDI Polyphonic key pressure event to the instrument.
SendPressureForKeyWithValueOnChannel(key uint8, value uint8, channel uint8)
// Sends MIDI Program Change and Bank Select events to the instrument.
SendProgramChangeOnChannel(program uint8, channel uint8)
// Sends MIDI Program Change and Bank Select events to the instrument.
SendProgramChangeBankMSBBankLSBOnChannel(program uint8, bankMSB uint8, bankLSB uint8, channel uint8)
// Sends a MIDI event list to the instrument.
SendMIDIEventList(eventList objectivec.IObject)
// Sends a MIDI Note On event to the instrument.
StartNoteWithVelocityOnChannel(note uint8, velocity uint8, channel uint8)
// Sends a MIDI Note Off event to the instrument.
StopNoteOnChannel(note uint8, channel uint8)
}
An interface definition for the AVAudioUnitMIDIInstrument class.
Creating a MIDI instrument ¶
- [IAVAudioUnitMIDIInstrument.InitWithAudioComponentDescription]: Creates a MIDI instrument audio unit with the component description you specify.
Sending information to the MIDI instrument ¶
- [IAVAudioUnitMIDIInstrument.SendControllerWithValueOnChannel]: Sends a MIDI controller event to the instrument.
- [IAVAudioUnitMIDIInstrument.SendMIDIEventData1]: Sends a MIDI event that contains one data byte to the instrument.
- [IAVAudioUnitMIDIInstrument.SendMIDIEventData1Data2]: Sends a MIDI event that contains two data bytes to the instrument.
- [IAVAudioUnitMIDIInstrument.SendMIDISysExEvent]: Sends a MIDI System Exclusive event to the instrument.
- [IAVAudioUnitMIDIInstrument.SendPitchBendOnChannel]: Sends a MIDI Pitch Bend event to the instrument.
- [IAVAudioUnitMIDIInstrument.SendPressureOnChannel]: Sends a MIDI channel pressure event to the instrument.
- [IAVAudioUnitMIDIInstrument.SendPressureForKeyWithValueOnChannel]: Sends a MIDI Polyphonic key pressure event to the instrument.
- [IAVAudioUnitMIDIInstrument.SendProgramChangeOnChannel]: Sends MIDI Program Change and Bank Select events to the instrument.
- [IAVAudioUnitMIDIInstrument.SendProgramChangeBankMSBBankLSBOnChannel]: Sends MIDI Program Change and Bank Select events to the instrument.
- [IAVAudioUnitMIDIInstrument.SendMIDIEventList]: Sends a MIDI event list to the instrument.
Starting and stopping play ¶
- [IAVAudioUnitMIDIInstrument.StartNoteWithVelocityOnChannel]: Sends a MIDI Note On event to the instrument.
- [IAVAudioUnitMIDIInstrument.StopNoteOnChannel]: Sends a MIDI Note Off event to the instrument.
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitMIDIInstrument
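The note-on/note-off pair above follows the MIDI 1.0 convention: note numbers and velocities are 0–127, channels are 0–15. A minimal sketch (using the standard library `time` package for the note length):

```go
// Sketch: sound middle C on channel 0 for half a second.
func playMiddleC(inst avfaudio.IAVAudioUnitMIDIInstrument) {
	const middleC, velocity, channel = 60, 100, 0
	inst.StartNoteWithVelocityOnChannel(middleC, velocity, channel)
	time.Sleep(500 * time.Millisecond) // hold the note
	inst.StopNoteOnChannel(middleC, channel)
}
```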
type IAVAudioUnitReverb ¶
type IAVAudioUnitReverb interface {
IAVAudioUnitEffect
// Configures the audio unit by loading a reverb preset.
LoadFactoryPreset(preset AVAudioUnitReverbPreset)
// The blend of the wet and dry signals.
WetDryMix() float32
SetWetDryMix(value float32)
}
An interface definition for the AVAudioUnitReverb class.
Configuring the reverb ¶
- [IAVAudioUnitReverb.LoadFactoryPreset]: Configures the audio unit by loading a reverb preset.
Getting and setting the reverb values ¶
- [IAVAudioUnitReverb.WetDryMix]: The blend of the wet and dry signals.
- [IAVAudioUnitReverb.SetWetDryMix]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitReverb
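Reverb configuration is just a preset plus a mix level. The sketch assumes the binding mirrors the Objective-C preset constant AVAudioUnitReverbPresetLargeHall.

```go
// Sketch: a large-hall reverb blended 40% wet.
func configureHall(reverb avfaudio.IAVAudioUnitReverb) {
	reverb.LoadFactoryPreset(avfaudio.AVAudioUnitReverbPresetLargeHall)
	reverb.SetWetDryMix(40) // percent wet
}
```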
type IAVAudioUnitSampler ¶
type IAVAudioUnitSampler interface {
IAVAudioUnitMIDIInstrument
// Configures the sampler with the specified instrument file.
LoadInstrumentAtURLError(instrumentURL foundation.INSURL) (bool, error)
// Configures the sampler by loading the specified audio files.
LoadAudioFilesAtURLsError(audioFiles []foundation.NSURL) (bool, error)
// Loads a specific instrument from the specified soundbank.
LoadSoundBankInstrumentAtURLProgramBankMSBBankLSBError(bankURL foundation.INSURL, program uint8, bankMSB uint8, bankLSB uint8) (bool, error)
// An adjustment for the tuning of all the played notes, in cents.
GlobalTuning() float32
SetGlobalTuning(value float32)
// An adjustment for the gain of all the played notes, in decibels.
OverallGain() float32
SetOverallGain(value float32)
// An adjustment for the stereo panning of all the played notes.
StereoPan() float32
SetStereoPan(value float32)
}
An interface definition for the AVAudioUnitSampler class.
Configuring the Sampler Audio Unit ¶
- [IAVAudioUnitSampler.LoadInstrumentAtURLError]: Configures the sampler with the specified instrument file.
- [IAVAudioUnitSampler.LoadAudioFilesAtURLsError]: Configures the sampler by loading the specified audio files.
- [IAVAudioUnitSampler.LoadSoundBankInstrumentAtURLProgramBankMSBBankLSBError]: Loads a specific instrument from the specified soundbank.
Getting and Setting Sampler Values ¶
- [IAVAudioUnitSampler.GlobalTuning]: An adjustment for the tuning of all the played notes, in cents.
- [IAVAudioUnitSampler.SetGlobalTuning]
- [IAVAudioUnitSampler.OverallGain]: An adjustment for the gain of all the played notes, in decibels.
- [IAVAudioUnitSampler.SetOverallGain]
- [IAVAudioUnitSampler.StereoPan]: An adjustment for the stereo panning of all the played notes.
- [IAVAudioUnitSampler.SetStereoPan]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitSampler
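Loading from a sound bank takes a General MIDI-style program and bank pair. In the sketch, 0x79 is Core Audio's kAUSampler_DefaultMelodicBankMSB and program 0 is Acoustic Grand Piano in the General MIDI mapping; the binding's parameter order follows the method name.

```go
// Sketch: load a General MIDI piano from a DLS or SoundFont bank.
func loadPiano(sampler avfaudio.IAVAudioUnitSampler, bankURL foundation.INSURL) error {
	if ok, err := sampler.LoadSoundBankInstrumentAtURLProgramBankMSBBankLSBError(bankURL, 0, 0x79, 0); !ok {
		return err
	}
	sampler.SetOverallGain(-3) // trim the output by 3 dB
	return nil
}
```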
type IAVAudioUnitTimeEffect ¶
type IAVAudioUnitTimeEffect interface {
IAVAudioUnit
// Creates a time effect audio unit with the specified description.
InitWithAudioComponentDescription(audioComponentDescription objectivec.IObject) AVAudioUnitTimeEffect
// The bypass state of the audio unit.
Bypass() bool
SetBypass(value bool)
}
An interface definition for the AVAudioUnitTimeEffect class.
Creating a time effect ¶
- [IAVAudioUnitTimeEffect.InitWithAudioComponentDescription]: Creates a time effect audio unit with the specified description.
Getting and setting the time effect ¶
- [IAVAudioUnitTimeEffect.Bypass]: The bypass state of the audio unit.
- [IAVAudioUnitTimeEffect.SetBypass]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimeEffect
type IAVAudioUnitTimePitch ¶
type IAVAudioUnitTimePitch interface {
IAVAudioUnitTimeEffect
// The amount of overlap between segments of the input audio signal.
Overlap() float32
SetOverlap(value float32)
// The amount by which to pitch shift the input signal, in cents.
Pitch() float32
SetPitch(value float32)
// The playback rate of the input signal.
Rate() float32
SetRate(value float32)
}
An interface definition for the AVAudioUnitTimePitch class.
Getting and setting time pitch values ¶
- [IAVAudioUnitTimePitch.Overlap]: The amount of overlap between segments of the input audio signal.
- [IAVAudioUnitTimePitch.SetOverlap]
- [IAVAudioUnitTimePitch.Pitch]: The amount by which to pitch shift the input signal, in cents.
- [IAVAudioUnitTimePitch.SetPitch]
- [IAVAudioUnitTimePitch.Rate]: The playback rate of the input signal.
- [IAVAudioUnitTimePitch.SetRate]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitTimePitch
type IAVAudioUnitVarispeed ¶
type IAVAudioUnitVarispeed interface {
IAVAudioUnitTimeEffect
// The audio playback rate.
Rate() float32
SetRate(value float32)
}
An interface definition for the AVAudioUnitVarispeed class.
Getting and setting the playback rate ¶
- [IAVAudioUnitVarispeed.Rate]: The audio playback rate.
- [IAVAudioUnitVarispeed.SetRate]
See: https://developer.apple.com/documentation/AVFAudio/AVAudioUnitVarispeed
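Unlike AVAudioUnitTimePitch, varispeed changes speed and pitch together, the way a tape machine does. A hedged sketch of the side effect on pitch, in plain Go:

```go
package main

import (
	"fmt"
	"math"
)

// semitoneShift reports the pitch change, in semitones, that a varispeed-style
// rate change produces, since AVAudioUnitVarispeed alters speed and pitch
// together rather than independently.
func semitoneShift(rate float64) float64 {
	return 12 * math.Log2(rate)
}

func main() {
	fmt.Println(semitoneShift(2.0)) // doubling the rate raises pitch 12 semitones (one octave)
	fmt.Println(semitoneShift(0.5)) // halving the rate lowers it one octave
}
```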
type IAVExtendedNoteOnEvent ¶
type IAVExtendedNoteOnEvent interface {
IAVMusicEvent
// Creates an event with a MIDI note, velocity, group identifier, and duration.
InitWithMIDINoteVelocityGroupIDDuration(midiNote float32, velocity float32, groupID uint32, duration AVMusicTimeStamp) AVExtendedNoteOnEvent
// Creates a note on event with the default instrument.
InitWithMIDINoteVelocityInstrumentIDGroupIDDuration(midiNote float32, velocity float32, instrumentID uint32, groupID uint32, duration AVMusicTimeStamp) AVExtendedNoteOnEvent
// The MIDI note number.
MidiNote() float32
SetMidiNote(value float32)
// The MIDI velocity.
Velocity() float32
SetVelocity(value float32)
// The instrument identifier.
InstrumentID() uint32
SetInstrumentID(value uint32)
// The audio unit channel that handles the event.
GroupID() uint32
SetGroupID(value uint32)
// The duration of the event, in beats.
Duration() AVMusicTimeStamp
SetDuration(value AVMusicTimeStamp)
}
An interface definition for the AVExtendedNoteOnEvent class.
Creating a Note On Event ¶
- [IAVExtendedNoteOnEvent.InitWithMIDINoteVelocityGroupIDDuration]: Creates an event with a MIDI note, velocity, group identifier, and duration.
- [IAVExtendedNoteOnEvent.InitWithMIDINoteVelocityInstrumentIDGroupIDDuration]: Creates a note on event with the default instrument.
Configuring a Note On Event ¶
- [IAVExtendedNoteOnEvent.MidiNote]: The MIDI note number.
- [IAVExtendedNoteOnEvent.SetMidiNote]
- [IAVExtendedNoteOnEvent.Velocity]: The MIDI velocity.
- [IAVExtendedNoteOnEvent.SetVelocity]
- [IAVExtendedNoteOnEvent.InstrumentID]: The instrument identifier.
- [IAVExtendedNoteOnEvent.SetInstrumentID]
- [IAVExtendedNoteOnEvent.GroupID]: The audio unit channel that handles the event.
- [IAVExtendedNoteOnEvent.SetGroupID]
- [IAVExtendedNoteOnEvent.Duration]: The duration of the event, in beats.
- [IAVExtendedNoteOnEvent.SetDuration]
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedNoteOnEvent
type IAVExtendedTempoEvent ¶
type IAVExtendedTempoEvent interface {
IAVMusicEvent
// Creates an extended tempo event.
InitWithTempo(tempo float64) AVExtendedTempoEvent
// The tempo in beats per minute as a positive value.
Tempo() float64
SetTempo(value float64)
}
An interface definition for the AVExtendedTempoEvent class.
Creating a Tempo Event ¶
- [IAVExtendedTempoEvent.InitWithTempo]: Creates an extended tempo event.
Configuring a Tempo Event ¶
- [IAVExtendedTempoEvent.Tempo]: The tempo in beats per minute as a positive value.
- [IAVExtendedTempoEvent.SetTempo]
See: https://developer.apple.com/documentation/AVFAudio/AVExtendedTempoEvent
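Because sequence positions and durations are expressed in beats, the tempo a tempo event carries is what ties beats to wall-clock time. The conversion, in plain Go:

```go
package main

import "fmt"

// beatsToSeconds converts a duration in beats to seconds at the given tempo,
// the beats-per-minute value an AVExtendedTempoEvent carries.
func beatsToSeconds(beats, bpm float64) float64 {
	return beats * 60 / bpm
}

func main() {
	fmt.Println(beatsToSeconds(4, 120)) // four beats at 120 BPM take 2 seconds
	fmt.Println(beatsToSeconds(1, 60))  // one beat at 60 BPM takes 1 second
}
```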
type IAVMIDIChannelEvent ¶
type IAVMIDIChannelEvent interface {
IAVMusicEvent
// The MIDI channel.
Channel() uint32
SetChannel(value uint32)
}
An interface definition for the AVMIDIChannelEvent class.
Configuring a Channel Event ¶
- [IAVMIDIChannelEvent.Channel]: The MIDI channel.
- [IAVMIDIChannelEvent.SetChannel]
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIChannelEvent
type IAVMIDIChannelPressureEvent ¶
type IAVMIDIChannelPressureEvent interface {
IAVMIDIChannelEvent
// Creates a pressure event with a channel and pressure value.
InitWithChannelPressure(channel uint32, pressure uint32) AVMIDIChannelPressureEvent
// The MIDI channel pressure.
Pressure() uint32
SetPressure(value uint32)
}
An interface definition for the AVMIDIChannelPressureEvent class.
Creating a Pressure Event ¶
- [IAVMIDIChannelPressureEvent.InitWithChannelPressure]: Creates a pressure event with a channel and pressure value.
Configuring a Pressure Event ¶
- [IAVMIDIChannelPressureEvent.Pressure]: The MIDI channel pressure.
- [IAVMIDIChannelPressureEvent.SetPressure]
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIChannelPressureEvent
type IAVMIDIControlChangeEvent ¶
type IAVMIDIControlChangeEvent interface {
IAVMIDIChannelEvent
// Creates an event with a channel, control change type, and a value.
InitWithChannelMessageTypeValue(channel uint32, messageType AVMIDIControlChangeMessageType, value uint32) AVMIDIControlChangeEvent
// The value of the control change event.
Value() uint32
// The type of control change message.
MessageType() AVMIDIControlChangeMessageType
}
An interface definition for the AVMIDIControlChangeEvent class.
Creating a Control Change Event ¶
- [IAVMIDIControlChangeEvent.InitWithChannelMessageTypeValue]: Creates an event with a channel, control change type, and a value.
Inspecting a Control Change Event ¶
- [IAVMIDIControlChangeEvent.Value]: The value of the control change event.
- [IAVMIDIControlChangeEvent.MessageType]: The type of control change message.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIControlChangeEvent
type IAVMIDIMetaEvent ¶
type IAVMIDIMetaEvent interface {
IAVMusicEvent
// Creates an event with a MIDI meta event type and data.
InitWithTypeData(type_ AVMIDIMetaEventType, data foundation.INSData) AVMIDIMetaEvent
// The type of meta event.
Type() AVMIDIMetaEventType
}
An interface definition for the AVMIDIMetaEvent class.
Creating a Meta Event ¶
- [IAVMIDIMetaEvent.InitWithTypeData]: Creates an event with a MIDI meta event type and data.
Getting the Meta Event Type ¶
- [IAVMIDIMetaEvent.Type]: The type of meta event.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIMetaEvent
type IAVMIDINoteEvent ¶
type IAVMIDINoteEvent interface {
IAVMusicEvent
// Creates an event with a MIDI channel, key number, velocity, and duration.
InitWithChannelKeyVelocityDuration(channel uint32, keyNum uint32, velocity uint32, duration AVMusicTimeStamp) AVMIDINoteEvent
// The MIDI channel.
Channel() uint32
SetChannel(value uint32)
// The MIDI key number.
Key() uint32
SetKey(value uint32)
// The MIDI velocity.
Velocity() uint32
SetVelocity(value uint32)
// The duration for the note, in beats.
Duration() AVMusicTimeStamp
SetDuration(value AVMusicTimeStamp)
}
An interface definition for the AVMIDINoteEvent class.
Creating a MIDI Note Event ¶
- [IAVMIDINoteEvent.InitWithChannelKeyVelocityDuration]: Creates an event with a MIDI channel, key number, velocity, and duration.
Configuring a MIDI Note Event ¶
- [IAVMIDINoteEvent.Channel]: The MIDI channel.
- [IAVMIDINoteEvent.SetChannel]
- [IAVMIDINoteEvent.Key]: The MIDI key number.
- [IAVMIDINoteEvent.SetKey]
- [IAVMIDINoteEvent.Velocity]: The MIDI velocity.
- [IAVMIDINoteEvent.SetVelocity]
- [IAVMIDINoteEvent.Duration]: The duration for the note, in beats.
- [IAVMIDINoteEvent.SetDuration]
See: https://developer.apple.com/documentation/AVFAudio/AVMIDINoteEvent
type IAVMIDIPitchBendEvent ¶
type IAVMIDIPitchBendEvent interface {
IAVMIDIChannelEvent
// Creates an event with a channel and pitch bend value.
InitWithChannelValue(channel uint32, value uint32) AVMIDIPitchBendEvent
// The value of the pitch bend event.
Value() uint32
SetValue(value uint32)
}
An interface definition for the AVMIDIPitchBendEvent class.
Creating a Pitch Bend Event ¶
- [IAVMIDIPitchBendEvent.InitWithChannelValue]: Creates an event with a channel and pitch bend value.
Configuring a Pitch Bend Event ¶
- [IAVMIDIPitchBendEvent.Value]: The value of the pitch bend event.
- [IAVMIDIPitchBendEvent.SetValue]
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPitchBendEvent
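The pitch bend value is a 14-bit quantity (0 through 16383) with 8192 meaning no bend, per standard MIDI. How far a full bend travels in semitones is decided by the receiving instrument; the ±2 semitone range assumed below is only the common default:

```go
package main

import (
	"fmt"
	"math"
)

// bendValue maps a desired pitch offset in semitones to the 14-bit value an
// AVMIDIPitchBendEvent carries: 0-16383, with 8192 meaning no bend. bendRange
// is the semitone span of a full bend on the receiving instrument.
func bendValue(semitones, bendRange float64) uint32 {
	v := 8192 + math.Round(semitones/bendRange*8192)
	if v < 0 {
		v = 0
	}
	if v > 16383 {
		v = 16383
	}
	return uint32(v)
}

func main() {
	fmt.Println(bendValue(0, 2))  // 8192: centered, no bend
	fmt.Println(bendValue(2, 2))  // 16383: full bend up
	fmt.Println(bendValue(-2, 2)) // 0: full bend down
}
```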
type IAVMIDIPlayer ¶
type IAVMIDIPlayer interface {
objectivec.IObject
// Creates a player to play a MIDI file with the specified soundbank.
InitWithContentsOfURLSoundBankURLError(inURL foundation.INSURL, bankURL foundation.INSURL) (AVMIDIPlayer, error)
// Creates a player to play MIDI data with the specified soundbank.
InitWithDataSoundBankURLError(data foundation.INSData, bankURL foundation.INSURL) (AVMIDIPlayer, error)
// Prepares the player to play the sequence by prerolling all events.
PrepareToPlay()
// Plays the MIDI sequence.
Play(completionHandler ErrorHandler)
// Stops playing the sequence.
Stop()
// A Boolean value that indicates whether the sequence is playing.
Playing() bool
// The playback rate of the player.
Rate() float32
SetRate(value float32)
// The current playback position, in seconds.
CurrentPosition() float64
SetCurrentPosition(value float64)
// The duration, in seconds, of the currently loaded file.
Duration() float64
}
An interface definition for the AVMIDIPlayer class.
Creating a MIDI player ¶
- [IAVMIDIPlayer.InitWithContentsOfURLSoundBankURLError]: Creates a player to play a MIDI file with the specified soundbank.
- [IAVMIDIPlayer.InitWithDataSoundBankURLError]: Creates a player to play MIDI data with the specified soundbank.
Controlling playback ¶
- [IAVMIDIPlayer.PrepareToPlay]: Prepares the player to play the sequence by prerolling all events.
- [IAVMIDIPlayer.Play]: Plays the MIDI sequence.
- [IAVMIDIPlayer.Stop]: Stops playing the sequence.
- [IAVMIDIPlayer.Playing]: A Boolean value that indicates whether the sequence is playing.
Configuring playback settings ¶
- [IAVMIDIPlayer.Rate]: The playback rate of the player.
- [IAVMIDIPlayer.SetRate]
Accessing player timing ¶
- [IAVMIDIPlayer.CurrentPosition]: The current playback position, in seconds.
- [IAVMIDIPlayer.SetCurrentPosition]
- [IAVMIDIPlayer.Duration]: The duration, in seconds, of the currently loaded file.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPlayer
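Duration and CurrentPosition are reported in sequence seconds, while Rate scales how fast the player moves through them. Assuming the rate scales playback linearly, the remaining wall-clock time works out as:

```go
package main

import "fmt"

// remainingSeconds estimates the wall-clock time left in a sequence from the
// player's Duration and CurrentPosition (both in sequence seconds) and its
// Rate, assuming the rate linearly scales playback speed.
func remainingSeconds(duration, currentPosition, rate float64) float64 {
	return (duration - currentPosition) / rate
}

func main() {
	// 6 sequence seconds remain; at double speed they take 3 wall-clock seconds.
	fmt.Println(remainingSeconds(10, 4, 2.0))
}
```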
type IAVMIDIPolyPressureEvent ¶
type IAVMIDIPolyPressureEvent interface {
IAVMIDIChannelEvent
// Creates an event with a channel, MIDI key number, and a key pressure value.
InitWithChannelKeyPressure(channel uint32, key uint32, pressure uint32) AVMIDIPolyPressureEvent
// The MIDI key number.
Key() uint32
SetKey(value uint32)
// The poly pressure value for the requested key.
Pressure() uint32
SetPressure(value uint32)
}
An interface definition for the AVMIDIPolyPressureEvent class.
Creating a Poly Pressure Event ¶
- [IAVMIDIPolyPressureEvent.InitWithChannelKeyPressure]: Creates an event with a channel, MIDI key number, and a key pressure value.
Configuring a Poly Pressure Event ¶
- [IAVMIDIPolyPressureEvent.Key]: The MIDI key number.
- [IAVMIDIPolyPressureEvent.SetKey]
- [IAVMIDIPolyPressureEvent.Pressure]: The poly pressure value for the requested key.
- [IAVMIDIPolyPressureEvent.SetPressure]
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIPolyPressureEvent
type IAVMIDIProgramChangeEvent ¶
type IAVMIDIProgramChangeEvent interface {
IAVMIDIChannelEvent
// Creates a program change event with a channel and program number.
InitWithChannelProgramNumber(channel uint32, programNumber uint32) AVMIDIProgramChangeEvent
// The MIDI program number.
ProgramNumber() uint32
SetProgramNumber(value uint32)
}
An interface definition for the AVMIDIProgramChangeEvent class.
Creating a Program Change Event ¶
- [IAVMIDIProgramChangeEvent.InitWithChannelProgramNumber]: Creates a program change event with a channel and program number.
Configuring a Program Change Event ¶
- [IAVMIDIProgramChangeEvent.ProgramNumber]: The MIDI program number.
- [IAVMIDIProgramChangeEvent.SetProgramNumber]
See: https://developer.apple.com/documentation/AVFAudio/AVMIDIProgramChangeEvent
type IAVMIDISysexEvent ¶
type IAVMIDISysexEvent interface {
IAVMusicEvent
// Creates a system exclusive event with the data you specify.
InitWithData(data foundation.INSData) AVMIDISysexEvent
// The size of the data that this event contains.
SizeInBytes() uint32
}
An interface definition for the AVMIDISysexEvent class.
Creating a System Exclusive Event ¶
- [IAVMIDISysexEvent.InitWithData]: Creates a system exclusive event with the data you specify.
Getting the Size of the Event ¶
- [IAVMIDISysexEvent.SizeInBytes]: The size of the data that this event contains.
See: https://developer.apple.com/documentation/AVFAudio/AVMIDISysexEvent
type IAVMusicEvent ¶
type IAVMusicEvent interface {
objectivec.IObject
}
An interface definition for the AVMusicEvent class.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicEvent
type IAVMusicTrack ¶
type IAVMusicTrack interface {
objectivec.IObject
// A Boolean value that indicates whether the track is in a muted state.
Muted() bool
SetMuted(value bool)
// A Boolean value that indicates whether the track is in a soloed state.
Soloed() bool
SetSoloed(value bool)
// The offset of the track’s start time, in beats.
OffsetTime() AVMusicTimeStamp
SetOffsetTime(value AVMusicTimeStamp)
// The time resolution value for the sequence, in ticks (pulses) per quarter note.
TimeResolution() uint
// A Boolean value that indicates whether the track is an automation track.
UsesAutomatedParameters() bool
SetUsesAutomatedParameters(value bool)
// The total duration of the track, in beats.
LengthInBeats() AVMusicTimeStamp
SetLengthInBeats(value AVMusicTimeStamp)
// The total duration of the track, in seconds.
LengthInSeconds() float64
SetLengthInSeconds(value float64)
// The audio unit that receives the track’s events.
DestinationAudioUnit() IAVAudioUnit
SetDestinationAudioUnit(value IAVAudioUnit)
// The MIDI endpoint you specify as the track’s target.
DestinationMIDIEndpoint() objectivec.IObject
SetDestinationMIDIEndpoint(value objectivec.IObject)
// A Boolean value that indicates whether the track is in a looping state.
LoopingEnabled() bool
SetLoopingEnabled(value bool)
// The timestamp range for the loop, in beats.
LoopRange() AVBeatRange
SetLoopRange(value AVBeatRange)
// The number of times the track’s loop repeats.
NumberOfLoops() int
SetNumberOfLoops(value int)
// Adds a music event to a track at the time you specify.
AddEventAtBeat(event IAVMusicEvent, beat AVMusicTimeStamp)
// Moves the beat location of all events in the given beat range by the amount you specify.
MoveEventsInRangeByAmount(range_ AVBeatRange, beatAmount AVMusicTimeStamp)
// Removes all events in the given beat range from the music track.
ClearEventsInRange(range_ AVBeatRange)
// Removes all events in the beat range and shifts subsequent events earlier.
CutEventsInRange(range_ AVBeatRange)
// Copies the events from the source track and splices them into the current music track.
CopyEventsInRangeFromTrackInsertAtBeat(range_ AVBeatRange, sourceTrack IAVMusicTrack, insertStartBeat AVMusicTimeStamp)
// Copies the events from the source track and merges them into the current music track.
CopyAndMergeEventsInRangeFromTrackMergeAtBeat(range_ AVBeatRange, sourceTrack IAVMusicTrack, mergeStartBeat AVMusicTimeStamp)
// Iterates through the music events within the track.
EnumerateEventsInRangeUsingBlock(range_ AVBeatRange, block AVMusicEventEnumerationBlock)
// A timestamp you use to access all events in a music track through a beat range.
AVMusicTimeStampEndOfTrack() float64
SetAVMusicTimeStampEndOfTrack(value float64)
}
An interface definition for the AVMusicTrack class.
Configuring Music Track Properties ¶
- [IAVMusicTrack.Muted]: A Boolean value that indicates whether the track is in a muted state.
- [IAVMusicTrack.SetMuted]
- [IAVMusicTrack.Soloed]: A Boolean value that indicates whether the track is in a soloed state.
- [IAVMusicTrack.SetSoloed]
- [IAVMusicTrack.OffsetTime]: The offset of the track’s start time, in beats.
- [IAVMusicTrack.SetOffsetTime]
- [IAVMusicTrack.TimeResolution]: The time resolution value for the sequence, in ticks (pulses) per quarter note.
- [IAVMusicTrack.UsesAutomatedParameters]: A Boolean value that indicates whether the track is an automation track.
- [IAVMusicTrack.SetUsesAutomatedParameters]
Configuring the Track Duration ¶
- [IAVMusicTrack.LengthInBeats]: The total duration of the track, in beats.
- [IAVMusicTrack.SetLengthInBeats]
- [IAVMusicTrack.LengthInSeconds]: The total duration of the track, in seconds.
- [IAVMusicTrack.SetLengthInSeconds]
Configuring the Track Destinations ¶
- [IAVMusicTrack.DestinationAudioUnit]: The audio unit that receives the track’s events.
- [IAVMusicTrack.SetDestinationAudioUnit]
- [IAVMusicTrack.DestinationMIDIEndpoint]: The MIDI endpoint you specify as the track’s target.
- [IAVMusicTrack.SetDestinationMIDIEndpoint]
Configuring the Looping State ¶
- [IAVMusicTrack.LoopingEnabled]: A Boolean value that indicates whether the track is in a looping state.
- [IAVMusicTrack.SetLoopingEnabled]
- [IAVMusicTrack.LoopRange]: The timestamp range for the loop, in beats.
- [IAVMusicTrack.SetLoopRange]
- [IAVMusicTrack.NumberOfLoops]: The number of times the track’s loop repeats.
- [IAVMusicTrack.SetNumberOfLoops]
Adding and Clearing Events ¶
- [IAVMusicTrack.AddEventAtBeat]: Adds a music event to a track at the time you specify.
- [IAVMusicTrack.MoveEventsInRangeByAmount]: Moves the beat location of all events in the given beat range by the amount you specify.
- [IAVMusicTrack.ClearEventsInRange]: Removes all events in the given beat range from the music track.
Cutting and Copying Events ¶
- [IAVMusicTrack.CutEventsInRange]: Removes all events in the beat range and shifts subsequent events earlier.
- [IAVMusicTrack.CopyEventsInRangeFromTrackInsertAtBeat]: Copies the events from the source track and splices them into the current music track.
- [IAVMusicTrack.CopyAndMergeEventsInRangeFromTrackMergeAtBeat]: Copies the events from the source track and merges them into the current music track.
Iterating Over Events ¶
- [IAVMusicTrack.EnumerateEventsInRangeUsingBlock]: Iterates through the music events within the track.
Getting the End of Track Timestamp ¶
- [IAVMusicTrack.AVMusicTimeStampEndOfTrack]: A timestamp you use to access all events in a music track through a beat range.
- [IAVMusicTrack.SetAVMusicTimeStampEndOfTrack]
See: https://developer.apple.com/documentation/AVFAudio/AVMusicTrack
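The range-based editing methods all operate on events keyed by beat position. A minimal pure-Go model of what MoveEventsInRangeByAmount does; the event struct and names here are hypothetical stand-ins for AVMusicTimeStamp and AVBeatRange:

```go
package main

import "fmt"

// event models a music event keyed by its beat position.
type event struct {
	Beat float64
	Name string
}

// moveEventsInRange shifts every event whose beat falls inside
// [start, start+length) by amount, leaving events outside the range alone,
// mirroring the semantics of AVMusicTrack's MoveEventsInRangeByAmount.
func moveEventsInRange(events []event, start, length, amount float64) {
	for i := range events {
		if events[i].Beat >= start && events[i].Beat < start+length {
			events[i].Beat += amount
		}
	}
}

func main() {
	track := []event{{0, "kick"}, {1, "snare"}, {2, "kick"}, {3, "snare"}}
	moveEventsInRange(track, 1, 2, 0.5) // shift events in beats [1, 3) later by half a beat
	fmt.Println(track)                  // beats 1 and 2 move to 1.5 and 2.5; 0 and 3 stay put
}
```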
type IAVMusicUserEvent ¶
type IAVMusicUserEvent interface {
IAVMusicEvent
// Creates a user event with the data you specify.
InitWithData(data foundation.INSData) AVMusicUserEvent
// The size of the data that the user event represents.
SizeInBytes() uint32
}
An interface definition for the AVMusicUserEvent class.
Creating a User Event ¶
- [IAVMusicUserEvent.InitWithData]: Creates a user event with the data you specify.
Inspecting a User Event ¶
- [IAVMusicUserEvent.SizeInBytes]: The size of the data that the user event represents.
See: https://developer.apple.com/documentation/AVFAudio/AVMusicUserEvent
type IAVParameterEvent ¶
type IAVParameterEvent interface {
IAVMusicEvent
// Creates an event with a parameter identifier, scope, element, and value for the parameter to set.
InitWithParameterIDScopeElementValue(parameterID uint32, scope uint32, element uint32, value float32) AVParameterEvent
// The identifier of the parameter.
ParameterID() uint32
SetParameterID(value uint32)
// The audio unit scope for the parameter.
Scope() uint32
SetScope(value uint32)
// The element index in the scope.
Element() uint32
SetElement(value uint32)
// The value of the parameter to set.
Value() float32
SetValue(value float32)
}
An interface definition for the AVParameterEvent class.
Creating a Parameter Event ¶
- [IAVParameterEvent.InitWithParameterIDScopeElementValue]: Creates an event with a parameter identifier, scope, element, and value for the parameter to set.
Configuring a Parameter Event ¶
- [IAVParameterEvent.ParameterID]: The identifier of the parameter.
- [IAVParameterEvent.SetParameterID]
- [IAVParameterEvent.Scope]: The audio unit scope for the parameter.
- [IAVParameterEvent.SetScope]
- [IAVParameterEvent.Element]: The element index in the scope.
- [IAVParameterEvent.SetElement]
- [IAVParameterEvent.Value]: The value of the parameter to set.
- [IAVParameterEvent.SetValue]
See: https://developer.apple.com/documentation/AVFAudio/AVParameterEvent
type IAVSpeechSynthesisMarker ¶
type IAVSpeechSynthesisMarker interface {
objectivec.IObject
// Creates a marker with a type and location of the request’s text.
InitWithMarkerTypeForTextRangeAtByteSampleOffset(type_ AVSpeechSynthesisMarkerMark, range_ foundation.NSRange, byteSampleOffset uint) AVSpeechSynthesisMarker
// Creates a word marker with a range of the word and offset into the audio buffer.
InitWithWordRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
// Creates a sentence marker with a range of the sentence and offset into the audio buffer.
InitWithSentenceRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
// Creates a paragraph marker with a range of the paragraph and offset into the audio buffer.
InitWithParagraphRangeAtByteSampleOffset(range_ foundation.NSRange, byteSampleOffset int) AVSpeechSynthesisMarker
// Creates a phoneme marker with a range of the phoneme and offset into the audio buffer.
InitWithPhonemeStringAtByteSampleOffset(phoneme string, byteSampleOffset int) AVSpeechSynthesisMarker
// Creates a bookmark marker with a name and offset into the audio buffer.
InitWithBookmarkNameAtByteSampleOffset(mark string, byteSampleOffset int) AVSpeechSynthesisMarker
// The type that describes the text.
Mark() AVSpeechSynthesisMarkerMark
SetMark(value AVSpeechSynthesisMarkerMark)
// A string that represents the name of a bookmark.
BookmarkName() string
SetBookmarkName(value string)
// A string that represents a distinct sound.
Phoneme() string
SetPhoneme(value string)
// The location and length of the request’s text.
TextRange() foundation.NSRange
SetTextRange(value foundation.NSRange)
// The byte offset into the audio buffer.
ByteSampleOffset() uint
SetByteSampleOffset(value uint)
// A block that subclasses use to send marker information to the host.
SpeechSynthesisOutputMetadataBlock() AVSpeechSynthesisProviderOutputBlock
SetSpeechSynthesisOutputMetadataBlock(value AVSpeechSynthesisProviderOutputBlock)
EncodeWithCoder(coder foundation.INSCoder)
}
An interface definition for the AVSpeechSynthesisMarker class.
Creating a marker ¶
- [IAVSpeechSynthesisMarker.InitWithMarkerTypeForTextRangeAtByteSampleOffset]: Creates a marker with a type and location of the request’s text.
- [IAVSpeechSynthesisMarker.InitWithWordRangeAtByteSampleOffset]: Creates a word marker with a range of the word and offset into the audio buffer.
- [IAVSpeechSynthesisMarker.InitWithSentenceRangeAtByteSampleOffset]: Creates a sentence marker with a range of the sentence and offset into the audio buffer.
- [IAVSpeechSynthesisMarker.InitWithParagraphRangeAtByteSampleOffset]: Creates a paragraph marker with a range of the paragraph and offset into the audio buffer.
- [IAVSpeechSynthesisMarker.InitWithPhonemeStringAtByteSampleOffset]: Creates a phoneme marker with a range of the phoneme and offset into the audio buffer.
- [IAVSpeechSynthesisMarker.InitWithBookmarkNameAtByteSampleOffset]: Creates a bookmark marker with a name and offset into the audio buffer.
Inspecting a marker ¶
- [IAVSpeechSynthesisMarker.Mark]: The type that describes the text.
- [IAVSpeechSynthesisMarker.SetMark]
- [IAVSpeechSynthesisMarker.BookmarkName]: A string that represents the name of a bookmark.
- [IAVSpeechSynthesisMarker.SetBookmarkName]
- [IAVSpeechSynthesisMarker.Phoneme]: A string that represents a distinct sound.
- [IAVSpeechSynthesisMarker.SetPhoneme]
- [IAVSpeechSynthesisMarker.TextRange]: The location and length of the request’s text.
- [IAVSpeechSynthesisMarker.SetTextRange]
- [IAVSpeechSynthesisMarker.ByteSampleOffset]: The byte offset into the audio buffer.
- [IAVSpeechSynthesisMarker.SetByteSampleOffset]
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisMarker
type IAVSpeechSynthesisProviderAudioUnit ¶
type IAVSpeechSynthesisProviderAudioUnit interface {
objectivec.IObject
// Sets the text to synthesize and the voice to use.
SynthesizeSpeechRequest(speechRequest IAVSpeechSynthesisProviderRequest)
// A block that subclasses use to send marker information to the host.
SpeechSynthesisOutputMetadataBlock() AVSpeechSynthesisProviderOutputBlock
SetSpeechSynthesisOutputMetadataBlock(value AVSpeechSynthesisProviderOutputBlock)
// A list of voices the audio unit provides to the system.
SpeechVoices() []AVSpeechSynthesisProviderVoice
SetSpeechVoices(value []AVSpeechSynthesisProviderVoice)
// Informs the audio unit to discard the speech request.
CancelSpeechRequest()
}
An interface definition for the AVSpeechSynthesisProviderAudioUnit class.
Rendering speech ¶
- [IAVSpeechSynthesisProviderAudioUnit.SynthesizeSpeechRequest]: Sets the text to synthesize and the voice to use.
Supplying metadata ¶
- [IAVSpeechSynthesisProviderAudioUnit.SpeechSynthesisOutputMetadataBlock]: A block that subclasses use to send marker information to the host.
- [IAVSpeechSynthesisProviderAudioUnit.SetSpeechSynthesisOutputMetadataBlock]
Getting and setting voices ¶
- [IAVSpeechSynthesisProviderAudioUnit.SpeechVoices]: A list of voices the audio unit provides to the system.
- [IAVSpeechSynthesisProviderAudioUnit.SetSpeechVoices]
Cancelling a request ¶
- [IAVSpeechSynthesisProviderAudioUnit.CancelSpeechRequest]: Informs the audio unit to discard the speech request.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderAudioUnit
type IAVSpeechSynthesisProviderRequest ¶
type IAVSpeechSynthesisProviderRequest interface {
objectivec.IObject
// Creates a request with an SSML representation of the text and a voice.
InitWithSSMLRepresentationVoice(text string, voice IAVSpeechSynthesisProviderVoice) AVSpeechSynthesisProviderRequest
// The SSML representation of the text to synthesize.
SsmlRepresentation() string
// The voice to use in the speech request.
Voice() IAVSpeechSynthesisProviderVoice
EncodeWithCoder(coder foundation.INSCoder)
}
An interface definition for the AVSpeechSynthesisProviderRequest class.
Creating a request ¶
- [IAVSpeechSynthesisProviderRequest.InitWithSSMLRepresentationVoice]: Creates a request with an SSML representation of the text and a voice.
Inspecting a request ¶
- [IAVSpeechSynthesisProviderRequest.SsmlRepresentation]: The SSML representation of the text to synthesize.
- [IAVSpeechSynthesisProviderRequest.Voice]: The voice to use in the speech request.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderRequest
type IAVSpeechSynthesisProviderVoice ¶
type IAVSpeechSynthesisProviderVoice interface {
objectivec.IObject
// Creates a voice with a name, an identifier, and language information.
InitWithNameIdentifierPrimaryLanguagesSupportedLanguages(name string, identifier string, primaryLanguages []string, supportedLanguages []string) AVSpeechSynthesisProviderVoice
// The age of the voice, in years.
Age() int
SetAge(value int)
// The gender of the voice.
Gender() AVSpeechSynthesisVoiceGender
SetGender(value AVSpeechSynthesisVoiceGender)
// The unique identifier for the voice.
Identifier() string
// The localized name of the voice.
Name() string
// A list of BCP 47 codes that identify the languages the synthesizer uses.
PrimaryLanguages() []string
// A list of BCP 47 codes that identify the languages a voice supports.
SupportedLanguages() []string
// The version of the voice.
Version() string
SetVersion(value string)
// The size of the voice package on disk, in bytes.
VoiceSize() int64
SetVoiceSize(value int64)
// A list of voices the audio unit provides to the system.
SpeechVoices() IAVSpeechSynthesisProviderVoice
SetSpeechVoices(value IAVSpeechSynthesisProviderVoice)
EncodeWithCoder(coder foundation.INSCoder)
}
An interface definition for the AVSpeechSynthesisProviderVoice class.
Creating a voice ¶
- [IAVSpeechSynthesisProviderVoice.InitWithNameIdentifierPrimaryLanguagesSupportedLanguages]: Creates a voice with a name, an identifier, and language information.
Inspecting a voice ¶
- [IAVSpeechSynthesisProviderVoice.Age]: The age of the voice, in years.
- [IAVSpeechSynthesisProviderVoice.SetAge]
- [IAVSpeechSynthesisProviderVoice.Gender]: The gender of the voice.
- [IAVSpeechSynthesisProviderVoice.SetGender]
- [IAVSpeechSynthesisProviderVoice.Identifier]: The unique identifier for the voice.
- [IAVSpeechSynthesisProviderVoice.Name]: The localized name of the voice.
- [IAVSpeechSynthesisProviderVoice.PrimaryLanguages]: A list of BCP 47 codes that identify the languages the synthesizer uses.
- [IAVSpeechSynthesisProviderVoice.SupportedLanguages]: A list of BCP 47 codes that identify the languages a voice supports.
- [IAVSpeechSynthesisProviderVoice.Version]: The version of the voice.
- [IAVSpeechSynthesisProviderVoice.SetVersion]
- [IAVSpeechSynthesisProviderVoice.VoiceSize]: The size of the voice package on disk, in bytes.
- [IAVSpeechSynthesisProviderVoice.SetVoiceSize]
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisProviderVoice
type IAVSpeechSynthesisVoice ¶
type IAVSpeechSynthesisVoice interface {
objectivec.IObject
// The voice that the system identifies as Alex.
AVSpeechSynthesisVoiceIdentifierAlex() string
// The unique identifier of a voice.
Identifier() string
// The name of a voice.
Name() string
// The speech quality of a voice.
Quality() AVSpeechSynthesisVoiceQuality
// The gender for a voice.
Gender() AVSpeechSynthesisVoiceGender
// The traits of a voice.
VoiceTraits() AVSpeechSynthesisVoiceTraits
// A dictionary that contains audio file settings.
AudioFileSettings() foundation.INSDictionary
// A BCP 47 code that contains the voice’s language and locale.
Language() string
// The voice the speech synthesizer uses when speaking the utterance.
Voice() IAVSpeechSynthesisVoice
SetVoice(value IAVSpeechSynthesisVoice)
EncodeWithCoder(coder foundation.INSCoder)
}
An interface definition for the AVSpeechSynthesisVoice class.
Obtaining voices ¶
- [IAVSpeechSynthesisVoice.AVSpeechSynthesisVoiceIdentifierAlex]: The voice that the system identifies as Alex.
Inspecting voices ¶
- [IAVSpeechSynthesisVoice.Identifier]: The unique identifier of a voice.
- [IAVSpeechSynthesisVoice.Name]: The name of a voice.
- [IAVSpeechSynthesisVoice.Quality]: The speech quality of a voice.
- [IAVSpeechSynthesisVoice.Gender]: The gender for a voice.
- [IAVSpeechSynthesisVoice.VoiceTraits]: The traits of a voice.
- [IAVSpeechSynthesisVoice.AudioFileSettings]: A dictionary that contains audio file settings.
Working with language codes ¶
- [IAVSpeechSynthesisVoice.Language]: A BCP 47 code that contains the voice’s language and locale.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesisVoice
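Because Language is a BCP 47 code such as "en-GB", a common task is picking a voice that matches a requested language, falling back from an exact match to any voice sharing the primary language subtag. A hedged pure-Go sketch; the voice list below is hypothetical, with real codes coming from each voice's Language property:

```go
package main

import (
	"fmt"
	"strings"
)

// matchVoice picks a voice language for a requested BCP 47 code: an exact
// match like "en-GB" wins; otherwise any voice sharing the primary language
// subtag ("en") is accepted. Returns false when nothing matches.
func matchVoice(voiceLanguages []string, want string) (string, bool) {
	for _, lang := range voiceLanguages {
		if strings.EqualFold(lang, want) {
			return lang, true
		}
	}
	primary := strings.SplitN(want, "-", 2)[0]
	for _, lang := range voiceLanguages {
		if strings.EqualFold(strings.SplitN(lang, "-", 2)[0], primary) {
			return lang, true
		}
	}
	return "", false
}

func main() {
	voices := []string{"fr-FR", "en-US", "en-GB"}
	fmt.Println(matchVoice(voices, "en-GB")) // exact match wins
	fmt.Println(matchVoice(voices, "en-AU")) // falls back to the first "en" voice
}
```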
type IAVSpeechSynthesizer ¶
type IAVSpeechSynthesizer interface {
objectivec.IObject
// Adds the utterance you specify to the speech synthesizer’s queue.
SpeakUtterance(utterance IAVSpeechUtterance)
// Resumes speech from its paused point.
ContinueSpeaking() bool
// Pauses speech at the boundary you specify.
PauseSpeakingAtBoundary(boundary AVSpeechBoundary) bool
// Stops speech at the boundary you specify.
StopSpeakingAtBoundary(boundary AVSpeechBoundary) bool
// A Boolean value that indicates whether the speech synthesizer is speaking or is in a paused state and has utterances to speak.
Speaking() bool
// A Boolean value that indicates whether a speech synthesizer is in a paused state.
Paused() bool
// The delegate object for the speech synthesizer.
Delegate() AVSpeechSynthesizerDelegate
SetDelegate(value AVSpeechSynthesizerDelegate)
// Generates speech for the utterance and invokes the callback with the audio buffer.
WriteUtteranceToBufferCallback(utterance IAVSpeechUtterance, bufferCallback AVSpeechSynthesizerBufferCallback)
// Generates audio buffers and associated metadata for storage or further speech synthesis processing.
WriteUtteranceToBufferCallbackToMarkerCallback(utterance IAVSpeechUtterance, bufferCallback AVSpeechSynthesizerBufferCallback, markerCallback AVSpeechSynthesizerMarkerCallback)
// The amount of time the speech synthesizer pauses before speaking the utterance.
PreUtteranceDelay() float64
SetPreUtteranceDelay(value float64)
// The voice the speech synthesizer uses when speaking the utterance.
Voice() IAVSpeechSynthesisVoice
SetVoice(value IAVSpeechSynthesisVoice)
}
An interface definition for the AVSpeechSynthesizer class.
Controlling speech ¶
- [IAVSpeechSynthesizer.SpeakUtterance]: Adds the utterance you specify to the speech synthesizer’s queue.
- [IAVSpeechSynthesizer.ContinueSpeaking]: Resumes speech from its paused point.
- [IAVSpeechSynthesizer.PauseSpeakingAtBoundary]: Pauses speech at the boundary you specify.
- [IAVSpeechSynthesizer.StopSpeakingAtBoundary]: Stops speech at the boundary you specify.
Inspecting a speech synthesizer ¶
- [IAVSpeechSynthesizer.Speaking]: A Boolean value that indicates whether the speech synthesizer is speaking or is in a paused state and has utterances to speak.
- [IAVSpeechSynthesizer.Paused]: A Boolean value that indicates whether a speech synthesizer is in a paused state.
Managing the delegate ¶
- [IAVSpeechSynthesizer.Delegate]: The delegate object for the speech synthesizer.
- [IAVSpeechSynthesizer.SetDelegate]
Directing speech output ¶
- [IAVSpeechSynthesizer.WriteUtteranceToBufferCallback]: Generates speech for the utterance and invokes the callback with the audio buffer.
- [IAVSpeechSynthesizer.WriteUtteranceToBufferCallbackToMarkerCallback]: Generates audio buffers and associated metadata for storage or further speech synthesis processing.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechSynthesizer
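The control methods above compose into a simple queue-then-pause flow. The following is a hedged sketch, not a verified program: it assumes this package exposes an AVSpeechBoundaryWord constant in its AVSpeechBoundary enum (mirroring Apple's AVSpeechBoundary.word), and it takes the synthesizer and utterance as parameters because their constructors are outside this section.

```go
// speakAndPause sketches the documented queueing and pause/resume calls.
// SpeakUtterance enqueues the utterance; speaking starts immediately if
// the synthesizer's queue was idle.
func speakAndPause(s IAVSpeechSynthesizer, u IAVSpeechUtterance) {
	s.SpeakUtterance(u)
	if s.Speaking() {
		// Pause at the next word boundary rather than mid-word.
		// (AVSpeechBoundaryWord is assumed to mirror Apple's enum.)
		s.PauseSpeakingAtBoundary(AVSpeechBoundaryWord)
	}
	if s.Paused() {
		s.ContinueSpeaking() // resumes from the paused point
	}
}
```

PauseSpeakingAtBoundary and StopSpeakingAtBoundary each return a bool indicating whether the request took effect; production code would check those results rather than re-querying Speaking and Paused as this sketch does.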
type IAVSpeechUtterance ¶
type IAVSpeechUtterance interface {
objectivec.IObject
// Creates an utterance with the text string that you specify for the speech synthesizer to speak.
InitWithString(string_ string) AVSpeechUtterance
// Creates an utterance with the attributed text string that you specify for the speech synthesizer to speak.
InitWithAttributedString(string_ foundation.NSAttributedString) AVSpeechUtterance
// A string that contains International Phonetic Alphabet (IPA) symbols the speech synthesizer uses to control pronunciation of certain words or phrases.
AVSpeechSynthesisIPANotationAttribute() string
// Creates a speech utterance with a Speech Synthesis Markup Language (SSML) string.
InitWithSSMLRepresentation(string_ string) AVSpeechUtterance
// The voice the speech synthesizer uses when speaking the utterance.
Voice() IAVSpeechSynthesisVoice
SetVoice(value IAVSpeechSynthesisVoice)
// The baseline pitch the speech synthesizer uses when speaking the utterance.
PitchMultiplier() float32
SetPitchMultiplier(value float32)
// The volume the speech synthesizer uses when speaking the utterance.
Volume() float32
SetVolume(value float32)
// A Boolean that specifies whether assistive technology settings take precedence over the property values of this utterance.
PrefersAssistiveTechnologySettings() bool
SetPrefersAssistiveTechnologySettings(value bool)
// The rate the speech synthesizer uses when speaking the utterance.
Rate() float32
SetRate(value float32)
// The minimum rate the speech synthesizer uses when speaking an utterance.
AVSpeechUtteranceMinimumSpeechRate() float32
// The maximum rate the speech synthesizer uses when speaking an utterance.
AVSpeechUtteranceMaximumSpeechRate() float32
// The default rate the speech synthesizer uses when speaking an utterance.
AVSpeechUtteranceDefaultSpeechRate() float32
// The amount of time the speech synthesizer pauses before speaking the utterance.
PreUtteranceDelay() float64
SetPreUtteranceDelay(value float64)
// The amount of time the speech synthesizer pauses after speaking an utterance before handling the next utterance in the queue.
PostUtteranceDelay() float64
SetPostUtteranceDelay(value float64)
// A string that contains the text for speech synthesis.
SpeechString() string
// An attributed string that contains the text for speech synthesis.
AttributedSpeechString() foundation.NSAttributedString
EncodeWithCoder(coder foundation.INSCoder)
}
An interface definition for the AVSpeechUtterance class.
Creating an utterance ¶
- [IAVSpeechUtterance.InitWithString]: Creates an utterance with the text string that you specify for the speech synthesizer to speak.
- [IAVSpeechUtterance.InitWithAttributedString]: Creates an utterance with the attributed text string that you specify for the speech synthesizer to speak.
- [IAVSpeechUtterance.AVSpeechSynthesisIPANotationAttribute]: A string that contains International Phonetic Alphabet (IPA) symbols the speech synthesizer uses to control pronunciation of certain words or phrases.
- [IAVSpeechUtterance.InitWithSSMLRepresentation]: Creates a speech utterance with a Speech Synthesis Markup Language (SSML) string.
Configuring an utterance ¶
- [IAVSpeechUtterance.Voice]: The voice the speech synthesizer uses when speaking the utterance.
- [IAVSpeechUtterance.SetVoice]
- [IAVSpeechUtterance.PitchMultiplier]: The baseline pitch the speech synthesizer uses when speaking the utterance.
- [IAVSpeechUtterance.SetPitchMultiplier]
- [IAVSpeechUtterance.Volume]: The volume the speech synthesizer uses when speaking the utterance.
- [IAVSpeechUtterance.SetVolume]
- [IAVSpeechUtterance.PrefersAssistiveTechnologySettings]: A Boolean that specifies whether assistive technology settings take precedence over the property values of this utterance.
- [IAVSpeechUtterance.SetPrefersAssistiveTechnologySettings]
Configuring utterance timing ¶
- [IAVSpeechUtterance.Rate]: The rate the speech synthesizer uses when speaking the utterance.
- [IAVSpeechUtterance.SetRate]
- [IAVSpeechUtterance.AVSpeechUtteranceMinimumSpeechRate]: The minimum rate the speech synthesizer uses when speaking an utterance.
- [IAVSpeechUtterance.AVSpeechUtteranceMaximumSpeechRate]: The maximum rate the speech synthesizer uses when speaking an utterance.
- [IAVSpeechUtterance.AVSpeechUtteranceDefaultSpeechRate]: The default rate the speech synthesizer uses when speaking an utterance.
- [IAVSpeechUtterance.PreUtteranceDelay]: The amount of time the speech synthesizer pauses before speaking the utterance.
- [IAVSpeechUtterance.SetPreUtteranceDelay]
- [IAVSpeechUtterance.PostUtteranceDelay]: The amount of time the speech synthesizer pauses after speaking an utterance before handling the next utterance in the queue.
- [IAVSpeechUtterance.SetPostUtteranceDelay]
Inspecting utterance text ¶
- [IAVSpeechUtterance.SpeechString]: A string that contains the text for speech synthesis.
- [IAVSpeechUtterance.AttributedSpeechString]: An attributed string that contains the text for speech synthesis.
See: https://developer.apple.com/documentation/AVFAudio/AVSpeechUtterance
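Putting the configuration properties together, here is a hedged sketch of preparing an utterance before queueing it. The numeric values follow Apple's documented ranges for the underlying class (rate between the minimum and maximum speech-rate constants, volume 0.0 through 1.0, pitch multiplier 1.0 meaning unmodified); the utterance and voice arrive as parameters since their constructors are not shown in this section.

```go
// configureUtterance sketches the documented setters on IAVSpeechUtterance.
// It only calls accessors listed above; the literal values are examples,
// not defaults of this package.
func configureUtterance(u IAVSpeechUtterance, v IAVSpeechSynthesisVoice) {
	u.SetVoice(v)             // voice used for this utterance only
	u.SetRate(0.5)            // must fall within the Minimum/Maximum speech-rate bounds
	u.SetPitchMultiplier(1.2) // baseline pitch; 1.0 leaves the voice unmodified
	u.SetVolume(0.8)          // 0.0 (silent) through 1.0 (loudest)
	u.SetPostUtteranceDelay(0.25) // seconds of silence before the next queued utterance
}
```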
Source Files
¶
- audio3_d_mixing_protocol.gen.go
- audio_mixing_protocol.gen.go
- audio_player_delegate_protocol.gen.go
- audio_recorder_delegate_protocol.gen.go
- audio_stereo_mixing_protocol.gen.go
- av_audio_application.gen.go
- av_audio_buffer.gen.go
- av_audio_channel_layout.gen.go
- av_audio_compressed_buffer.gen.go
- av_audio_connection_point.gen.go
- av_audio_converter.gen.go
- av_audio_engine.gen.go
- av_audio_environment_distance_attenuation_parameters.gen.go
- av_audio_environment_node.gen.go
- av_audio_environment_reverb_parameters.gen.go
- av_audio_file.gen.go
- av_audio_format.gen.go
- av_audio_input_node.gen.go
- av_audio_io_node.gen.go
- av_audio_mixer_node.gen.go
- av_audio_mixing_destination.gen.go
- av_audio_node.gen.go
- av_audio_output_node.gen.go
- av_audio_pcm_buffer.gen.go
- av_audio_player.gen.go
- av_audio_player_node.gen.go
- av_audio_recorder.gen.go
- av_audio_routing_arbiter.gen.go
- av_audio_sequencer.gen.go
- av_audio_session_capability.gen.go
- av_audio_sink_node.gen.go
- av_audio_source_node.gen.go
- av_audio_time.gen.go
- av_audio_unit.gen.go
- av_audio_unit_component.gen.go
- av_audio_unit_component_manager.gen.go
- av_audio_unit_delay.gen.go
- av_audio_unit_distortion.gen.go
- av_audio_unit_effect.gen.go
- av_audio_unit_eq.gen.go
- av_audio_unit_eq_filter_parameters.gen.go
- av_audio_unit_generator.gen.go
- av_audio_unit_midi_instrument.gen.go
- av_audio_unit_reverb.gen.go
- av_audio_unit_sampler.gen.go
- av_audio_unit_time_effect.gen.go
- av_audio_unit_time_pitch.gen.go
- av_audio_unit_varispeed.gen.go
- av_extended_note_on_event.gen.go
- av_extended_tempo_event.gen.go
- av_music_event.gen.go
- av_music_track.gen.go
- av_music_user_event.gen.go
- av_parameter_event.gen.go
- av_speech_synthesis_marker.gen.go
- av_speech_synthesis_provider_audio_unit.gen.go
- av_speech_synthesis_provider_request.gen.go
- av_speech_synthesis_provider_voice.gen.go
- av_speech_synthesis_voice.gen.go
- av_speech_synthesizer.gen.go
- av_speech_utterance.gen.go
- avau_preset_event.gen.go
- avmidi_channel_event.gen.go
- avmidi_channel_pressure_event.gen.go
- avmidi_control_change_event.gen.go
- avmidi_meta_event.gen.go
- avmidi_note_event.gen.go
- avmidi_pitch_bend_event.gen.go
- avmidi_player.gen.go
- avmidi_poly_pressure_event.gen.go
- avmidi_program_change_event.gen.go
- avmidi_sysex_event.gen.go
- blocks.gen.go
- delegate_class_counter.gen.go
- doc.gen.go
- enums.gen.go
- functions.gen.go
- generate.go
- global_vars.gen.go
- speech_synthesizer_delegate_protocol.gen.go
- typedefs.gen.go
- types.gen.go
- undefined_types.gen.go