Documentation
Index
Constants
const (
UserEventSizeLimit = 128 // Maximum byte size for event name and payload
)
Variables
This section is empty.
Functions
This section is empty.
Types
type Config
type Config struct {
// The name of this node. This must be unique in the cluster. If this
// is not set, Serf will set it to the hostname of the running machine.
NodeName string
// The role for this node, if any. This is used to differentiate
// between perhaps different members of a Serf. For example, you might
// have a "load-balancer" role and a "web" role part of the same cluster.
// When new nodes are added, the load balancer wants to know (so it
// must be part of the cluster), but it doesn't want to add other load
// balancers to the rotation, so it checks if the added nodes are "web".
Role string
// EventCh is a channel that receives all the Serf events. The events
// are sent on this channel in proper ordering. Care must be taken that
// this channel doesn't block, either by processing the events quickly
// enough or buffering the channel, otherwise it can block state updates
// within Serf itself. If no EventCh is specified, no events will be fired,
// but point-in-time snapshots of members can still be retrieved by
// calling Members on Serf.
EventCh chan<- Event
// BroadcastTimeout is the amount of time to wait for a broadcast
// message to be sent to the cluster. Broadcast messages are used for
// things like leave messages and force remove messages. If this is not
// set, a timeout of 5 seconds will be set.
BroadcastTimeout time.Duration
// The settings below relate to Serf's event coalescence feature. Serf
// is able to coalesce multiple events into single events in order to
// reduce the amount of noise that is sent along the EventCh. For example,
// if five nodes quickly join, the EventCh will be sent one EventMemberJoin
// containing the five nodes rather than five individual EventMemberJoin
// events. Coalescence can mitigate potential flapping behavior.
//
// Coalescence is disabled by default and can be enabled by setting
// CoalescePeriod.
//
// CoalescePeriod specifies the time duration to coalesce events.
// For example, if this is set to 5 seconds, then all events received
// within 5 seconds that can be coalesced will be.
//
// QuiescentPeriod specifies the duration of time where if no events
// are received, coalescence immediately happens. For example, if
// CoalescePeriod is set to 10 seconds but QuiescentPeriod is set to 2
// seconds, then the events will be coalesced and dispatched if no
// new events are received within 2 seconds of the last event. Otherwise,
// every event will always be delayed by at least 10 seconds.
CoalescePeriod time.Duration
QuiescentPeriod time.Duration
// The settings below relate to Serf keeping track of recently
// failed/left nodes and attempting reconnects.
//
// ReapInterval is the interval when the reaper runs. If this is not
// set (it is zero), it will be set to a reasonable default.
//
// ReconnectInterval is the interval when we attempt to reconnect
// to failed nodes. If this is not set (it is zero), it will be set
// to a reasonable default.
//
// ReconnectTimeout is the amount of time to attempt to reconnect to
// a failed node before giving up and considering it completely gone.
//
// TombstoneTimeout is the amount of time to keep around nodes
// that gracefully left as tombstones for syncing state with other
// Serf nodes.
ReapInterval time.Duration
ReconnectInterval time.Duration
ReconnectTimeout time.Duration
TombstoneTimeout time.Duration
// QueueDepthWarning is used to generate a warning message if the
// number of queued messages to broadcast exceeds this number. This
// is to provide user feedback if events are being triggered
// faster than they can be disseminated.
QueueDepthWarning int
// RecentIntentBuffer is used to set the size of recent join and leave intent
// messages that will be buffered. This is used to guard against
// the case where Serf broadcasts an intent that arrives before the
// Memberlist event. It is important that this not be too small to avoid
// continuous rebroadcasting of dead events.
RecentIntentBuffer int
// EventBuffer is used to control how many events are buffered.
// This is used to prevent re-delivery of events to a client. The buffer
// must be large enough to handle all "recent" events, since Serf will
// not deliver messages that are older than the oldest entry in the buffer.
// Thus if a client is generating too many events, it's possible that the
// buffer gets overrun and messages are not delivered.
EventBuffer int
// MemberlistConfig is the memberlist configuration that Serf will
// use to do the underlying membership management and gossip. Some
// fields in the MemberlistConfig will be overwritten by Serf no
// matter what:
//
// * Name - This will always be set to the same as the NodeName
// in this configuration.
//
// * Events - Serf uses a custom event delegate.
//
// * Delegate - Serf uses a custom delegate.
//
MemberlistConfig *memberlist.Config
// LogOutput is the location to write logs to. If this is not set,
// logs will go to stderr.
LogOutput io.Writer
}
Config is the configuration for creating a Serf instance.
func DefaultConfig
func DefaultConfig() *Config
DefaultConfig returns a Config struct that contains reasonable defaults for most of the configurations.
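The EventCh documentation above warns that a blocked channel can stall state updates inside Serf. A common pattern is a buffered channel drained by a dedicated goroutine. The sketch below uses plain strings as stand-in events (in the real library they implement the Event interface) so it stays self-contained:

```go
package main

import "fmt"

// drain consumes events as fast as they arrive so the producer (Serf's
// internal state machine, in the real library) is never blocked.
func drain(events <-chan string, done chan<- int) {
	n := 0
	for ev := range events {
		// Handle the event here; keep this fast, or hand off
		// to worker goroutines for anything expensive.
		_ = ev
		n++
	}
	done <- n
}

func main() {
	// Buffer the channel so short bursts never block the producer;
	// this is the channel you would pass as Config.EventCh.
	eventCh := make(chan string, 64)
	done := make(chan int)
	go drain(eventCh, done)

	for i := 0; i < 5; i++ {
		eventCh <- fmt.Sprintf("member-join:%d", i)
	}
	close(eventCh)
	fmt.Println("handled", <-done, "events") // handled 5 events
}
```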
type Event
Event is a generic interface for exposing Serf events. Clients will usually need to use a type switch to get to a more useful type.
type EventType
type EventType int
EventType are all the types of events that may occur and be sent along the Serf channel.
type LamportClock
type LamportClock struct {
// contains filtered or unexported fields
}
LamportClock is a thread-safe implementation of a Lamport clock. It uses efficient atomic operations for all of its functions, falling back to a heavy lock only if there are enough CAS failures.
func (*LamportClock) Increment
func (l *LamportClock) Increment() LamportTime
Increment is used to increment and return the value of the Lamport clock.
func (*LamportClock) Time
func (l *LamportClock) Time() LamportTime
Time is used to return the current value of the Lamport clock.
func (*LamportClock) Witness
func (l *LamportClock) Witness(v LamportTime)
Witness is called to update our local clock if necessary after witnessing a clock value received from another process.
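The semantics of Increment, Time, and Witness can be illustrated with a minimal atomic-based sketch. This is not Serf's implementation (it uses a plain CAS retry loop rather than the fallback-lock strategy described above), but the observable behavior is the same: Witness fast-forwards the clock past a value seen from another process, so the next local event is ordered after the remote one.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// LamportTime is the value of a Lamport clock.
type LamportTime uint64

// clock is a minimal Lamport clock sketch.
type clock struct {
	counter uint64
}

// Time returns the current clock value.
func (c *clock) Time() LamportTime {
	return LamportTime(atomic.LoadUint64(&c.counter))
}

// Increment advances the clock and returns the new value.
func (c *clock) Increment() LamportTime {
	return LamportTime(atomic.AddUint64(&c.counter, 1))
}

// Witness fast-forwards the clock to one past v if v is not already
// behind us, retrying on CAS failure.
func (c *clock) Witness(v LamportTime) {
	for {
		cur := atomic.LoadUint64(&c.counter)
		if uint64(v) < cur {
			return // already ahead of the witnessed value
		}
		// Jump to one past the witnessed value so our next
		// event is ordered after the remote one.
		if atomic.CompareAndSwapUint64(&c.counter, cur, uint64(v)+1) {
			return
		}
	}
}

func main() {
	var c clock
	fmt.Println(c.Increment()) // 1
	c.Witness(10)              // saw a remote clock at 10
	fmt.Println(c.Time())      // 11
	fmt.Println(c.Increment()) // 12
}
```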
type Member
type Member struct {
Name string
Addr net.IP
Role string
Status MemberStatus
}
Member is a single member of the Serf cluster.
type MemberEvent
MemberEvent is the struct used for member-related events. Because Serf coalesces events, an event may contain multiple members.
func (MemberEvent) EventType
func (m MemberEvent) EventType() EventType
func (MemberEvent) String
func (m MemberEvent) String() string
type MemberStatus
type MemberStatus int
MemberStatus is the state that a member is in.
const (
StatusNone MemberStatus = iota
StatusAlive
StatusLeaving
StatusLeft
StatusFailed
)
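The iota-based enumeration above is conventionally paired with a String method like the one Serf exposes. A hypothetical re-implementation for illustration (the actual labels returned by the library may differ):

```go
package main

import "fmt"

// MemberStatus mirrors the enum above for illustration.
type MemberStatus int

const (
	StatusNone MemberStatus = iota
	StatusAlive
	StatusLeaving
	StatusLeft
	StatusFailed
)

// String maps each status to a human-readable label. The labels here
// are illustrative assumptions, not taken from the library.
func (s MemberStatus) String() string {
	switch s {
	case StatusNone:
		return "none"
	case StatusAlive:
		return "alive"
	case StatusLeaving:
		return "leaving"
	case StatusLeft:
		return "left"
	case StatusFailed:
		return "failed"
	default:
		return fmt.Sprintf("unknown(%d)", int(s))
	}
}

func main() {
	// fmt uses the Stringer interface automatically.
	fmt.Println(StatusAlive, StatusFailed) // alive failed
}
```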
func (MemberStatus) String
func (s MemberStatus) String() string
type Serf
type Serf struct {
// contains filtered or unexported fields
}
Serf is a single node that is part of a single cluster that gets events about joins/leaves/failures/etc. It is created with the Create method.
All functions on the Serf structure are safe to call concurrently.
func Create
Create creates a new Serf instance, starting all the background tasks to maintain cluster membership information.
After calling this function, the configuration should no longer be used or modified by the caller.
func (*Serf) Join
Join joins an existing Serf cluster. Returns the number of nodes successfully contacted. The returned error will be non-nil only in the case that no nodes could be contacted.
func (*Serf) RemoveFailedNode
RemoveFailedNode forcibly removes a failed node from the cluster immediately, instead of waiting for the reaper to eventually reclaim it.
func (*Serf) Shutdown
Shutdown forcefully shuts down the Serf instance, stopping all network activity and background maintenance associated with the instance.
This is not a graceful shutdown, and should be preceded by a call to Leave. Otherwise, other nodes in the cluster will detect this node's exit as a node failure.
It is safe to call this method multiple times.