MindPype Package Contents
Note
It is highly recommended that you use the provided factory methods to create any MindPype objects. This ensures that the objects are created correctly and that the correct parameters are passed. For example, to create a new absolute kernel, use the
mindpype.kernels.AbsoluteKernel.create_absolute_kernel()
method instead of the
mindpype.kernels.AbsoluteKernel
constructor.
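A minimal sketch of this convention, using the container factory methods documented below (Session.create() is assumed here, following the Session example later on this page; shapes and values are illustrative):

from mindpype.core import Session
from mindpype.containers import Tensor, Scalar

sess = Session.create()                     # create the session first
tensor = Tensor.create(sess, (12, 500))     # e.g., 12 channels x 500 samples
scalar = Scalar.create_from_value(sess, 5)  # scalar holding the value 5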
Core Components
- class mindpype.core.MPBase(mp_type, session)[source]
Bases:
object
This is the base class for all objects used in the MindPype API. It serves to define some attributes that will be shared across all other objects.
- Parameters:
mp_type (MPEnums) – Enum indicating the type of the object
session (Session) – Session where the object will exist
- property session_id
Returns the session ID of the object
- Returns:
session_id – ID of the session where the object exists
- Return type:
int
- class mindpype.core.MPEnums(*values)[source]
Bases:
IntEnum
Defines a class of enums used by MindPype
The following enums are defined and available for use:
Object Type MPEnums - Leading '1'

  Enum            Value
  MPBase          100
  SESSION         101
  GRAPH           102
  NODE            103
  KERNEL          104
  PARAMETER       105
  TENSOR          106
  SCALAR          107
  ARRAY           108
  CIRCLE_BUFFER   109
  FILTER          110
  SRC             111
  CLASSIFIER      112

Parameter Directions - Leading '3'

  Enum     Value
  INPUT    300
  OUTPUT   301
  INOUT    302

Kernel Initialization Types - Leading '4'

  Enum             Value
  INIT_FROM_NONE   400
  INIT_FROM_DATA   401
  INIT_FROM_COPY   402
- class mindpype.core.Session[source]
Bases:
MPBase
Session objects contain all other MindPype object instances within a data capture session.
Examples
>>> from mindpype.classes import session as S
>>> S.session.create()
- add_data(data)[source]
Add a data object to the session
- Parameters:
data (Tensor or Scalar or Array or CircleBuffer) – Data object to add
Examples
>>> session.add_data(data)
- add_ext_out(src)[source]
Add an external outlet to the session
- Parameters:
src (OutputLSLStream, CircleBuffer, or other external outlet) – External outlet to add
- add_ext_src(src)[source]
Add an external source to the session
- Parameters:
src (LSLStream, CircleBuffer, or other external source) – External source to add
- add_graph(graph)[source]
Add a graph to the session
- Parameters:
graph (Graph) – Graph to add to the session
Examples
>>> session.add_graph(graph)
- add_misc_mp_obj(obj)[source]
Add a misc MindPype object to the session
- Parameters:
obj (any MindPype object) – MindPype object to add
Examples
>>> session.add_misc_mp_obj(obj)
- find_obj(id_num)[source]
Search for and return a MindPype object within the session with a specific ID number
- Parameters:
id_num (int) – ID number of the object to find
- Returns:
MindPype object – MindPype object with the specified ID number
- Return type:
- poll_volatile_channels(label=None)[source]
Update the contents of all volatile data streams
- Parameters:
label (str, optional) – Label for the current trial. The default is None.
>>> session.poll_volatile_channels()
Warning
For Developers: may need to add an input parameter with some timing information to indicate how each data object should be synced
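The following sketch shows basic Session bookkeeping. Session.create() is assumed per the example above, and the container factory methods documented below add their objects to the session automatically; shapes are illustrative:

from mindpype.core import Session
from mindpype.containers import Tensor

sess = Session.create()
trial_data = Tensor.create(sess, (8, 250))   # e.g., 8 channels x 250 samples
sess.poll_volatile_channels()                # update the contents of all volatile data streams
# Objects built outside the factory methods can be registered explicitly with
# sess.add_data(...), sess.add_graph(...), sess.add_ext_src(...), etc.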
Data Containers
Defines data container classes for MindPype. These classes are used to represent data in the MindPype framework.
- class mindpype.containers.Array(sess, capacity, element_template)[source]
Bases:
MPBase
Array containing instances of other MindPype classes. Each array can only hold one type of MindPype class.
Note
A single array object should only contain one MindPype/data object type.
- Parameters:
sess (Session object) – Session where the Array object will exist
capacity (int) – Maximum number of elements to be stored within the array (for allocation purposes)
element_template (any) – The template MindPype element to populate the array (see examples)
- virtual
If true, the Array object is virtual, non-virtual otherwise
- Type:
bool
- volatile
True if source is volatile (needs to be updated/polled between trials), false otherwise
- Type:
bool
- capacity
Max number of elements that can be stored in the array
- Type:
int
- _elements
Elements of the array
- Type:
array
Examples
>>> # Creating An Array of tensors
>>> template = Tensor.create(example_session, input_data.shape)
>>> example = Array.create(example_session, example_capacity, template)
- Returns:
array
- Return type:
Array Object
- assign_random_data(whole_numbers=False, vmin=0, vmax=1, covariance=False)[source]
Assign random data to the array. This is useful for testing and verification purposes.
- Parameters:
whole_numbers (bool) – Assigns data that is only whole numbers if True
vmin (int) – Lower limit for values in the random data
vmax (int) – Upper limit for values in the random data
covariance (bool) – If True, assigns a random covariance matrix
- property capacity
- copy_to(dest_array)[source]
Copy all the attributes of the array to another array. Note that the copied array will reference the same objects within the element list.
- Parameters:
dest_array (Array object) – Array object where the attributes with the referenced array will be copied to
Examples
>>> old_array.copy_to(copy_of_old_array)
- classmethod create(sess, capacity, element_template)[source]
Factory method to create array object
- Parameters:
sess (Session object) – Session where the Array object will exist
capacity (int) – Maximum number of elements to be stored within the array (for allocation purposes)
element_template (any) – The template MindPype element to populate the array (see examples)
- get_element(index)[source]
Returns the element at a specific index within an array object.
- Parameters:
index (int) – Position within the array from which the element will be returned. Index should satisfy 0 <= index < capacity
- Returns:
Data object at the given index
- Return type:
any
Examples
>>> example_element = example_array.get_element(0)
- make_copy()[source]
Create and return a deep copy of the array. The copied array will maintain references to the same objects. If a copy of these objects is also desired, they must be copied separately.
- Parameters:
None
Examples
>>> new_array = old_array.make_copy()
- property num_elements
- set_element(index, element)[source]
Changes the element at a particular index to a specified value
- Parameters:
index (int) – Index in the array where the element will change. 0 <= Index < capacity
element (any) – specified value which will be set at index index
Examples
>>> example_array.set_element(0, 12)  # changes 0th element to 12
>>> print(example_array.get_element(0), example_array.get_element(1))
(12, 5)
Notes
element must be the same type as the other elements within the array.
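A short usage sketch of the Array container (Session.create() assumed as above; the template shape and capacity are illustrative):

from mindpype.core import Session
from mindpype.containers import Tensor, Array

sess = Session.create()
template = Tensor.create(sess, (4, 4))      # template element defining the stored type
arr = Array.create(sess, 10, template)      # array with a capacity of 10 tensors
arr.assign_random_data()                    # fill with random data for testing
first = arr.get_element(0)                  # read the element at index 0
arr.set_element(0, first.make_copy())       # overwrite index 0 with a deep copy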
- class mindpype.containers.CircleBuffer(sess, capacity, element_template)[source]
Bases:
Array
A circular buffer (array) for MindPype data objects.
- Parameters:
sess (Session object) – Session where the Array object will exist
capacity (int) – Maximum number of elements to be stored within the array (for allocation purposes)
element_template (any) – The template MindPype element to populate the array (see Array examples)
- mp_type
Indicates the MindPype type of this object (MPEnums.CIRCLE_BUFFER)
- Type:
MPEnums
- head
First element of the circle buffer
- Type:
data object
- tail
Last element of the circle buffer
- Type:
data object
- assign_random_data(whole_numbers=False, vmin=0, vmax=1, covariance=False)[source]
Assign random data to the buffer. This is useful for testing and verification purposes.
- copy_to(dest_array)[source]
Copy all the attributes of the circle buffer to another circle buffer
- Parameters:
dest_array (Circle buffer) – Circle buffer to copy attributes to
- classmethod create(sess, capacity, element_template)[source]
Create circle buffer
- Parameters:
sess (Session Object) – Session where graph will exist
capacity (Int) – Capacity of buffer
element_template (any) – The template MindPype element to populate the array (see Array examples)
- Returns:
cb
- Return type:
Circle Buffer
- dequeue()[source]
Dequeue element from circle buffer
- Returns:
ret – MindPype data object at the head of the circle buffer that is removed
- Return type:
data object
- enqueue(obj)[source]
Enqueue an element into circle buffer
- Parameters:
obj (data object) – Object to be added to circle buffer
- enqueue_chunk(cb)[source]
Enqueue a number of elements from another circle buffer into this circle buffer
- Parameters:
cb (CircleBuffer) – Circle buffer whose elements will be enqueued into this circle buffer
- get_queued_element(index)[source]
Returns the element at a specific index within a Circle Buffer object.
- Parameters:
index (int) – Position within the circle buffer from which the element will be returned. Index should satisfy 0 <= index < capacity
- Returns:
Data object at the given index
- Return type:
any
Examples
>>> example_element = example_circle_buffer.get_queued_element(0)
- is_empty()[source]
Checks if circle buffer is empty
- Returns:
bool
- Return type:
True if circle buffer is empty, false otherwise
Examples
>>> is_empty = example_buffer.is_empty()
>>> print(is_empty)
True
- is_full()[source]
Checks if circle buffer is full
- Returns:
bool
- Return type:
True if circle buffer is full, false otherwise
Examples
>>> is_full = example_buffer.is_full()
>>> print(is_full)
True
- make_copy()[source]
Create and return a deep copy of the Circle Buffer. The copied Circle Buffer will maintain references to the same objects. If a copy of these objects is also desired, they must be copied separately.
- Returns:
cpy – Copy of the circle buffer
- Return type:
Circle Buffer
- property num_elements
Return the number of elements currently in the buffer.
- Parameters:
None
- Returns:
int
- Return type:
Number of elements currently in the buffer
Examples
>>> example_num_elements = example_buffer.num_elements
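A short usage sketch of the CircleBuffer container (Session.create() assumed as above; capacity and shapes are illustrative):

from mindpype.core import Session
from mindpype.containers import Tensor, CircleBuffer

sess = Session.create()
template = Tensor.create(sess, (4, 4))        # template element defining the stored type
cb = CircleBuffer.create(sess, 8, template)   # buffer with a capacity of 8 elements
cb.enqueue(template.make_copy())              # add an element at the tail
print(cb.num_elements, cb.is_empty())         # expected: 1 False
oldest = cb.dequeue()                         # remove and return the element at the head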
- class mindpype.containers.Scalar(sess, value_type, val, is_virtual, ext_src, ext_out=None)[source]
Bases:
MPBase
MindPype data container for scalar-type data. The valid data types are int, float, complex, str, and bool.
- Parameters:
sess (Session Object) – Session where the Scalar object will exist
value_type (one of [int, float, complex, str, bool]) – Indicates the type of data represented by the Scalar
val (value of type int, float, complex, str, or bool) – Data value represented by the Scalar object
is_virtual (bool) – If true, the Scalar object is virtual, non-virtual otherwise
ext_src (LSL data source input object, MAT data source, or None) – External data source represented by the scalar; this data will be polled/updated when trials are executed. If the data does not represent an external data source, set ext_src to None
- data_type
Indicates the type of data represented by the Scalar
- Type:
one of [int, float, complex, str, bool]
- data
Data value represented by the Scalar object
- Type:
value of type int, float, complex, str, or bool
- is_virtual
If true, the Scalar object is virtual, non-virtual otherwise
- Type:
bool
- ext_src
External data source represented by the scalar; this data will be polled/updated when trials are executed. If the data does not represent an external data source, set ext_src to None
- Type:
LSL data source input object, MAT data source, or None
- volatile
True if source is volatile (needs to be updated/polled between trials), false otherwise
- Type:
bool
Examples
>>> example_scalar = Scalar.create_from_value(example_session, 5)
- assign_random_data(whole_numbers=False, vmin=0, vmax=1, covariance=False)[source]
Assign random data to the scalar. This is useful for testing and verification purposes.
- Parameters:
whole_numbers (bool) – Assigns data that is only whole numbers if True
vmin (int) – Lower limit for values in the random data
vmax (int) – Upper limit for values in the random data
covariance (bool) – If True, assigns a random covariance matrix
- copy_to(dest_scalar)[source]
Copy all the elements of the scalar to another scalar
- Parameters:
dest_scalar (Scalar Object) – Scalar object which will represent the copy of the referenced Scalar’s elements
Examples
>>> example_scalar.copy_to(copy_of_example_scalar)
- classmethod create(sess, data_type)[source]
Initialize a non-virtual, non-volatile Scalar object with an empty data field and add it to the session
- Parameters:
sess (Session Object) – Session where the Scalar object will exist
data_type (int, float, complex, str, or bool) – Data type of data represented by Scalar object
- Return type:
Examples
>>> new_scalar = Scalar.create(sess, int)
>>> new_scalar.data = 5
- classmethod create_from_source(sess, data_type, src)[source]
Initialize a non-virtual, volatile Scalar object with an empty data field and add it to the session
- Parameters:
sess (Session Object) – Session where the Scalar object will exist
data_type (int, float, complex, str, or bool) – Data type of data represented by Scalar object
src (Data Source object) – Data source object (LSL, continuousMat, or epochedMat) from which to poll data
- Return type:
Examples
>>> new_scalar = Scalar.create_from_source(sess, int, src)
- classmethod create_from_value(sess, value)[source]
Initialize a non-virtual, non-volatile Scalar object with specified data and add it to the session
- Parameters:
sess (Session Object) – Session where the Scalar object will exist
value (Value of type int, float, complex, str, or bool) – Data represented by Scalar object
- Return type:
Examples
>>> new_scalar = Scalar.create_from_value(sess, 5)
>>> print(new_scalar.data)
5
- classmethod create_virtual(sess, data_type)[source]
Initialize a virtual, non-volatile Scalar object with an empty data field and add it to the session
- Parameters:
sess (Session Object) – Session where the Scalar object will exist
data_type (int, float, complex, str, or bool) – Data type of data represented by Scalar object
- Return type:
Examples
>>> new_scalar = Scalar.create_virtual(sess, int)
>>> new_scalar.data = 5
- property data
Getter for scalar data attribute
- Returns:
Data value represented by the Scalar object
- Return type:
int, float, complex, str, or bool
- make_copy()[source]
Produce and return a deep copy of the scalar
- Returns:
Deep copy of referenced parameter
- Return type:
Examples
>>> new_scalar = example_scalar.make_copy()
>>> print(new_scalar.data)
12
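A short usage sketch of the Scalar container (Session.create() assumed as above; values are illustrative):

from mindpype.core import Session
from mindpype.containers import Scalar

sess = Session.create()
s1 = Scalar.create_from_value(sess, 5)   # non-virtual scalar holding the value 5
s2 = Scalar.create(sess, int)            # empty int scalar
s1.copy_to(s2)                           # copy the value of s1 into s2
print(s2.data)                           # expected: 5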
- class mindpype.containers.Tensor(sess, shape, data, is_virtual, ext_src, ext_out=None)[source]
Bases:
MPBase
Tensors (n-dimensional matrices) are defined by the Tensor class. MindPype tensors can be volatile (updated each trial, generally reserved for tensors containing current trial data) or virtual (empty, dimensionless tensor objects). Like scalars and arrays, tensors can be created from data, copied from a different variable, or created virtually so that they do not initially contain a value. Each of the Scalar, Tensor, and Array data containers also has an external source (_ext_src) attribute, which indicates, if applicable, the source from which the data is pulled. This is especially important if trial/training data is loaded into a tensor each trial from an LSL stream or MAT file.
- Parameters:
sess (Session object) – Session where Tensor will exist
shape (shape_like) – Shape of the Tensor
data (ndarray) – Data to be stored within the Tensor
is_virtual (bool) – If False, the Tensor is non-virtual, if True, the Tensor is virtual
ext_src (input Source) – Data source the tensor pulls data from (only applies to Tensors created from a handle)
ext_out (output Source) – Data source the tensor pushes data to (only applies to Tensors created from a handle)
- shape
Shape of the data
- Type:
tuple
- virtual
If true, the Tensor object is virtual, non-virtual otherwise
- Type:
bool
- ext_src
External data source represented by the tensor; this data will be polled/updated when trials are executed. If the data does not represent an external data source, set ext_src to None
- Type:
LSL data source input object, MAT data source, or None
- ext_out
Data source the tensor pushes data to (only applies to Tensors created from a handle)
- Type:
output Source
- data
Data stored in the Tensor
- Type:
ndarray
- volatile
True if source is volatile (needs to be updated/polled between trials), false otherwise
- Type:
bool
- assign_random_data(whole_numbers=False, vmin=0, vmax=1, covariance=False)[source]
Assign random data to the tensor. This is useful for testing and verification purposes.
- Parameters:
whole_numbers (bool) – Assigns data that is only whole numbers if True
vmin (int) – Lower limit for values in the random data
vmax (int) – Upper limit for values in the random data
covariance (bool) – If True, assigns a random covariance matrix
- copy_to(dest_tensor)[source]
Copy the attributes of the tensor to another tensor object
- Parameters:
dest_tensor (Tensor object) – Tensor object where the attributes with the referenced Tensor will be copied to
- classmethod create(sess, shape)[source]
Factory Method to create a generic, non-virtual, Tensor object. The shape must be known to create this object
- Parameters:
sess (Session object) – Session where Tensor will exist
shape (shape_like) – Shape of the Tensor
- classmethod create_from_data(sess, data)[source]
Factory method to create a Tensor from data
- Parameters:
sess (Session object) – Session where Tensor will exist
data (ndarray) – Data to be stored within the Tensor
- classmethod create_from_handle(sess, shape, src)[source]
Factory method to create a Tensor from a handle/external source
- Parameters:
sess (Session object) – Session where Tensor will exist
shape (shape_like) – Shape of the Tensor
src (input Source) – Data source the tensor pulls data from (only applies to Tensors created from a handle)
- classmethod create_virtual(sess, shape=())[source]
Factory method to create a virtual Tensor
- Parameters:
sess (Session object) – Session where Tensor will exist
shape (shape_like, default = ()) – Shape of the Tensor, can be changed for virtual tensors
- property data
Getter for Tensor data
- Returns:
Data stored in Tensor
- Return type:
ndarray
Examples
>>> print(tensor.data)
- make_copy()[source]
Create and return a deep copy of the tensor
- Returns:
Deep copy of the Tensor object
- Return type:
Tensor object
Examples
>>> t = Tensor.create_virtual(sess, (1, 2, 3))
>>> t2 = t.make_copy()
- poll_volatile_data(label=None)[source]
Pull data from external sources or MindPype input data sources.
- Parameters:
label (int, default = None) – Class label corresponding to the class data to poll.
- property shape
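A short usage sketch of the Tensor container (Session.create() assumed as above; the NumPy data is illustrative):

import numpy as np
from mindpype.core import Session
from mindpype.containers import Tensor

sess = Session.create()
t1 = Tensor.create_from_data(sess, np.random.randn(8, 250))  # tensor wrapping an ndarray
t2 = t1.make_copy()                                          # deep copy of t1
t3 = Tensor.create_virtual(sess)                             # virtual tensor; shape assigned later
print(t1.shape, t2.data.mean())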
Graphs - The Processing Pipelines
graph.py - Defines the graph object
- class mindpype.graph.Graph(sess)[source]
Bases:
MPBase
This class represents the data processing flow graph, or processing pipelines. Individual nodes, or processing steps, are added to the graph to create the pipeline.
- Parameters:
sess (Session Object) – Session where the graph will exist
- _verified
True if the graph has been verified, false otherwise
- Type:
bool
- _sess
Session where the Graph object exists
- Type:
Session object
- _volatile_sources
Data sources within this array will be polled/executed when the graph is executed.
- Type:
List of Sources
- _volatile_outputs
Data outputs within this array will push to external sources when the graph is executed.
- Type:
List of data Outputs
- add_node(node)[source]
Append a node object to the list of nodes
- Parameters:
node (Node object) – Node object to append to the graph
- cross_validate(target_validation_output, folds=5, shuffle=False, random_state=None, statistic='accuracy')[source]
Perform cross validation on the graph or a portion of the graph.
- Parameters:
target_validation_output (data container) – MindPype container (Tensor, Scalar, etc.) containing the target validation output. Likely, this will be the output of a classification node.
folds (int, default = 5) – Number of folds to use for cross validation.
shuffle (bool, default = False) – Whether to shuffle the data before splitting into folds.
random_state (int, default = None) – Random state to use for shuffling the data.
statistic (str, default = 'accuracy') – Statistic to use for cross validation. Options include ‘accuracy’, ‘f1’, ‘precision’, ‘recall’, and ‘cross_entropy’.
- Returns:
mean_stat – Average score for the specified statistic (accuracy, f1, etc.)
- Return type:
float
- execute(label=None)[source]
Execute the graph by iterating over all the nodes within the graph and executing each one
- Parameters:
label (int, default = None) – If the trial label is known, it can be passed when a trial is executed; this is required for class-separated input data. If the trial label is not known, it will be polled from the data source.
- initialize(default_init_data=None, default_init_labels=None)[source]
Initialize each node within the graph for trial execution
- set_default_init_data(data, labels)[source]
Add default initialization data to the graph. If a node requires initialization data and it is not explicitly provided, this data will be used. It will be added as initialization data to any root nodes that ingest data from outside of the graph.
- Parameters:
data (Tensor or Array) – Tensor or array containing the default initialization data
labels (Tensor or Array) – Tensor or array containing the default initialization labels
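The following sketch outlines how a pipeline is typically assembled and run. Graph.create() and the AbsoluteKernel factory signature (graph, input, output) are assumptions based on the factory-method convention noted at the top of this page; the documented Graph(sess) constructor could be used instead:

from mindpype.core import Session
from mindpype.containers import Tensor
from mindpype.graph import Graph
from mindpype.kernels import AbsoluteKernel

sess = Session.create()                # assumed factory (see the Session example above)
graph = Graph.create(sess)             # assumed factory for the processing pipeline
t_in = Tensor.create(sess, (8, 250))   # input container
t_out = Tensor.create_virtual(sess)    # output container, shape resolved during verification
# Kernel factory methods create a Node and wire it into the graph; the exact
# signature below is an assumption.
AbsoluteKernel.create_absolute_kernel(graph, t_in, t_out)
graph.initialize()                     # initialize each node for trial execution
graph.execute()                        # execute one trial through the pipeline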
- class mindpype.graph.Node(graph, kernel, params)[source]
Bases:
MPBase
Generic node object containing a kernel function
- Parameters:
graph (Graph object) – Graph where the Node object will exist
kernel (Kernel Object) – Kernel object to be used for processing within the Node
params (dict) – Dictionary of parameters outputted by kernel
- kernel
Kernel object to be used for processing within the Node
- Type:
Kernel Object
- _params
Dictionary of parameters outputted by kernel
- Type:
dict
Examples
>>> Node.create(example_graph, example_kernel, example_params)
- add_initialization_data(init_data, init_labels=None)[source]
Add initialization data to the node
- Parameters:
init_data (list or tuple of data objects) – MindPype container containing the initialization data
init_labels (data object, default = None) – MindPype container containing the initialization labels
- extract_inputs()[source]
Return a list of all the node’s inputs
- Parameters:
None
- Returns:
List of inputs for the Node
- Return type:
List of Nodes
Examples
>>> inputs = example_node.extract_inputs()
>>> print(inputs)
None
- class mindpype.graph.Parameter(data, direction)[source]
Bases:
object
Parameter class can be used to abstract data types as inputs and outputs to nodes.
- Parameters:
data (any) – Reference to the data object represented by the parameter object
direction ([MPEnums.INPUT, MPEnums.OUTPUT]) – Enum indicating whether this is an input-type or output-type parameter
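A minimal sketch of wrapping a data container as a node parameter (Session.create() assumed as above; values are illustrative):

from mindpype.core import MPEnums, Session
from mindpype.containers import Tensor
from mindpype.graph import Parameter

sess = Session.create()
t_in = Tensor.create(sess, (8, 250))
p_in = Parameter(t_in, MPEnums.INPUT)   # mark the tensor as an input-type parameter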
External Data Sources
- Currently supported sources:
Lab Streaming Layer
XDF files
- class mindpype.source.InputLSLStream(sess, pred=None, channels=None, relative_start=0, marker_coupled=True, marker_fmt=None, marker_pred=None, stream_info=None, marker_stream_info=None, active=True, interval=None, Ns=1)[source]
Bases:
MPBase
An object for maintaining an LSL inlet
- data_buffer
A dictionary containing the data and time stamps from past samples (used when trials have overlapping data): {'Data': np.array, 'time_stamps': np.array}
- Type:
dict
- data_inlet
The LSL inlet object
- Type:
pylsl.StreamInlet
- marker_inlet
The LSL inlet object for the marker stream
- Type:
pylsl.StreamInlet
- marker_pattern
The regular expression pattern for the marker stream. Use “task1$|task2$|task3$” if task1, task2, and task3 are the markers
- Type:
re.Pattern
- channels
Index value of channels to poll from the stream, if None all channels will be polled.
- Type:
tuple of ints
- MAX_NULL_READS = 1000
- classmethod create_marker_coupled_data_stream(sess, pred=None, channels=None, relative_start=0, marker_fmt=None, marker_pred="type='Markers'", stream_info=None, marker_stream_info=None, Ns=1, active=True)[source]
Create a LSLStream data object that maintains a data stream and a marker stream
- Parameters:
sess (session object) – Session object where the data source will exist
pred (str) – The predicate string, e.g. “name=’BioSemi’” or “type=’EEG’ and starts-with(name, ‘BioSemi’) and count(description/desc/channels/channel)=32”
channels (tuple or list of ints) – Index value of channels to poll from the stream, if None all channels will be polled
marker_fmt (str) – Regular expression template of the marker to be matched, if none all markers will be matched
marker_pred (str) – Predicate string to match the marker stream, if None all streams will be matched
stream_info (StreamInfo object) – StreamInfo object to use for the data stream, if None a default StreamInfo object will be created
Ns (int, default = 1) – Number of samples to be extracted per poll.
- classmethod create_marker_uncoupled_data_stream(sess, pred=None, channels=None, relative_start=0, active=True, interval=None, Ns=1)[source]
Create a LSLStream data object that maintains only a data stream with no associated marker stream
- Parameters:
sess (session object) – Session object where the data source will exist
pred (str) – The predicate string, e.g. "name='BioSemi'" or "type='EEG' and starts-with(name, 'BioSemi') and count(description/desc/channels/channel)=32"
channels (tuple or list of ints) – Index value of channels to poll from the stream, if None all channels will be polled
active (bool) – Flag to indicate whether the stream is active or will be activated in the future
interval (float) – The minimum interval at which the stream will be polled
Ns (int, default = 1) – Number of samples to be extracted per poll.
- last_marker()[source]
Get the last marker in the marker stream
- Returns:
marker – The last marker string
- Return type:
str
- peek_marker()[source]
Peek at the next marker in the marker stream
- Returns:
marker – The marker string
- Return type:
str
- poll_data(label=None)[source]
Pull data from the inlet stream until we have Ns data points for each channel.
- Parameters:
Ns (int) – number of samples to collect
Label (None) – used for file-based polling, not used here
- update_input_streams(pred=None, channels=None, marker_coupled=True, marker_fmt=None, marker_pred=None, stream_info=None, marker_stream_info=None, Ns=1)[source]
Update the input stream with new parameters
- Parameters:
pred (str) – The predicate string, e.g. “name=’BioSemi’” or “type=’EEG’ and starts-with(name, ‘BioSemi’) and count(description/desc/channels/channel)=32”
channels (tuple of ints) – Index value of channels to poll from the stream, if None all channels will be polled
marker_coupled (bool) – true if there is an associated marker to indicate relative time where data should begin to be polled
marker_fmt (Regex or list) – Regular expression template of the marker to be matched, if none all markers will be matched. Alternatively, a list of markers can be provided.
marker_pred (str) – The predicate string for the marker stream
stream_info (pylsl.StreamInfo) – The stream info object for the stream can be passed instead of the predicate to avoid the need to resolve the stream
marker_stream_info (pylsl.StreamInfo) – The stream info object for the marker stream can be passed instead of the predicate to avoid the need to resolve the stream
Ns (int, default = 1) – The number of samples to be extracted per poll.
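A sketch of creating a marker-coupled LSL input and a Tensor that pulls from it each trial (Session.create() assumed as above; the predicate, channel indices, marker pattern, and Ns are placeholders):

from mindpype.core import Session
from mindpype.containers import Tensor
from mindpype.source import InputLSLStream

sess = Session.create()
lsl_src = InputLSLStream.create_marker_coupled_data_stream(
    sess,
    pred="type='EEG'",                  # predicate used to resolve the data stream
    channels=(0, 1, 2, 3),              # poll only the first four channels
    marker_fmt="target$|non-target$",   # regex matching the task markers
    Ns=250,                             # samples extracted per poll
)
# A Tensor created from a handle pulls its data from the source when polled
trial_tensor = Tensor.create_from_handle(sess, (4, 250), lsl_src)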
- class mindpype.source.InputXDFFile(sess, files, channels, tasks=None, relative_start=0, Ns=1, stype='EEG', mode='epoched')[source]
Bases:
MPBase
Utility class for extracting trial data from an XDF file for MindPype.
- Parameters:
sess (Session Object) – Session where the MPXDF data source will exist.
files (list of str) – XDF file(s) where data should be extracted from.
tasks (list or tuple of strings) – List or Tuple of strings corresponding to the tasks to be completed by the user. For example, the tasks ‘target’ and ‘non-target’/’flash’ can be used for P300-type setups.
channels (list or tuple of int) – Values corresponding to the stream channels used during the session
relative_start (float, default = 0) – Value corresponding to the start of the trial relative to the marker onset.
Ns (int, default = 1) – Number of samples to be extracted per trial. For epoched data, this value determines the size of each epoch, whereas this value is used in polling for continuous data.
mode ('continuous', 'class-separated' or 'epoched', default = 'epoched') – Mode indicates whether the inputted data will be epoched sequentially as individual trials, epoched by class, or to leave the data in a continuous format
Warning
The task list used in the InputXDFFile object MUST REFLECT the task list used in the XDF file. Differences will cause the program to fail.
Note
There are 3 types of modes for the MPXDF object: 'continuous', 'class-separated' and 'epoched'. Continuous mode will leave the data in a continuous format, and will poll the data for the next Ns samples each time the poll_data method is called. Class-separated mode will epoch the data by class, and will poll the data for the next Ns samples of the specified class each time the poll_data method is called. Epoched mode will epoch the data sequentially, and will poll the data for the next Ns samples of the next trial (Ns < length of the epoch) each time the poll_data method is called.
For P300/MI paradigms, where there are specified task names (i.e. ‘target’ and ‘non-target’/’flash’, etc.), class-separated mode is recommended. For other paradigms, where there are no specified task names, and data will be polled sequentially, either continuous or epoched mode is recommended.
Class-separated mode will store the data in a dictionary with the following format:
self.trial_data = {
    "Data": {
        "time_series": {
            task_name1: np.array([Nt x Nc x Ns]),
            task_name2: np.array([Nt x Nc x Ns]),
        },
        "time_stamps": np.array([Ns]),
    },
    "Markers": {
        "time_series": np.array([Ns]),
        "time_stamps": np.array([Ns]),
    },
}
Continuous mode will store the data in a dictionary with the following format:
self.trial_data = {
    "Data": {
        "time_series": np.array([Nc x Ns]),
        "time_stamps": np.array([Ns]),
    },
    "Markers": {
        "time_series": np.array([Ns]),
        "time_stamps": np.array([Ns]),
    },
}
Epoched mode will store the data in a dictionary with the following format:
self.trial_data = {
    "Data": {
        "time_series": np.array([Nt x Nc x Ns]),
        "time_stamps": np.array([Ns]),
    },
    "Markers": {
        "time_series": np.array([Ns]),
        "time_stamps": np.array([Ns]),
    },
}
- files
XDF file(s) where data should be extracted from.
- Type:
list of str
- relative_start
Value corresponding to the start of the trial relative to the marker onset.
- Type:
float, default = 0
- Ns
Number of samples to be extracted per trial. For epoched data, this value determines the size of each epoch, whereas this value is used in polling for continuous data.
- Type:
int, default = 1
- tasks
List or Tuple of strings corresponding to the tasks to be completed by the user. For example, the tasks ‘target’ and ‘non-target’/’flash’ can be used for P300-type setups.
- Type:
list or tuple of strings
- channels
Values corresponding to the stream channels used during the session
- Type:
list or tuple of int
- mode
Mode indicates whether the inputted data will be epoched sequentially as individual trials, epoched by class, or to leave the data in a continuous format
- Type:
‘continuous’, ‘class-separated’ or ‘epoched’, default = ‘epoched’
- stream_type
Type of stream (Data or Markers)
- Type:
str
- classmethod create_continuous(sess, files, channels, tasks=None, relative_start=0, Ns=1)[source]
Factory Method for creating continuous XDF File input source.
- Parameters:
sess (Session Object) – Session where the MPXDF data source will exist.
files (list of str) – XDF file(s) where data should be extracted from.
tasks (list or tuple of strings (default = None)) – List or Tuple of strings corresponding to the tasks to be completed by the user. For P300-type setups, the tasks ‘target’ and ‘non-target’/’flash’ can be used. If None, the tasks will be inferred from the marker stream. This is only supported for P300 data recorded using Mindset.
channels (list or tuple of int) – Values corresponding to the data stream channels used during the session
relative_start (float, default = 0) – Value corresponding to the start of the trial relative to the marker onset.
Ns (int, default = 1) – Number of samples to be extracted per trial. For epoched data, this value determines the size of each epoch, whereas this value is used in polling for continuous data.
- Returns:
src – Continuous XDF file input source
- Return type:
- classmethod create_epoched(sess, files, channels, tasks=None, relative_start=0, Ns=1, stype='EEG')[source]
Factory Method for creating epoched XDF File input source.
- Parameters:
sess (Session Object) – Session where the MPXDF data source will exist.
files (list of str) – XDF file(s) where data should be extracted from.
tasks (list or tuple of strings (default = None)) – List or Tuple of strings corresponding to the tasks to be completed by the user. For P300-type setups, the tasks ‘target’ and ‘non-target’/’flash’ can be used. If None, the tasks will be inferred from the marker stream. This is only supported for P300 data recorded using Mindset.
channels (list or tuple of int) – Values corresponding to the data stream channels used during the session
relative_start (float, default = 0) – Value corresponding to the start of the trial relative to the marker onset.
stype (str, default = EEG) – String indicating the data type
Ns (int, default = 1) – Number of samples to be extracted per trial. For class-separated data, this value determines the size of each epoch, whereas this value is used in polling for continuous data.
- Returns:
src – Epoched XDF file input source
- Return type:
- load_into_tensors(include_timestamps=False)[source]
Loads the entirety of the InputXDFFile data object into tensors. Returns 2-4 MindPype Tensor objects, in the following order:
Tensor containing the Stream data
Tensor containing the Marker data
Tensor containing the Stream timestamps (if continuous data and include_timestamps is True)
Tensor containing the Marker timestamps (if continuous data and include_timestamps is True)
- Parameters:
include_timestamps (bool, default = False) – If True, the function will return the Marker timestamps as well as the data. Only applicable for continuous data.
- Returns:
data (Tensor) – Tensor containing the stream data
labels (Tensor) – Tensor containing the numerical encoded markers
data_ts (Tensor) – Tensor containing the stream timestamps
labels_ts (Tensor) – Tensor containing the Marker timestamps
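A sketch of loading epoched trial data from an XDF recording (Session.create() assumed as above; the file name, channel indices, task names, and Ns are placeholders):

from mindpype.core import Session
from mindpype.source import InputXDFFile

sess = Session.create()
xdf_src = InputXDFFile.create_epoched(
    sess,
    files=["session1.xdf"],            # placeholder file name
    channels=[0, 1, 2, 3],
    tasks=["target", "non-target"],
    relative_start=0,
    Ns=500,
)
tensors = xdf_src.load_into_tensors()  # stream data and numerically encoded markers
data, labels = tensors[0], tensors[1]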
- class mindpype.source.OutputLSLStream(sess, stream_info, filesave=None, chunk_size=0, max_buffer=360)[source]
Bases:
MPBase
An object for maintaining an LSL outlet
- check_status(filesave)[source]
TODO: add description
- Parameters:
filesave – TODO: add type
- classmethod create_outlet(sess, name='untitled', type='', channel_count=1, nominal_srate=0.0, channel_format=1, source_id='', filesave=None)[source]
Factory Method to create an OutletLSLStream mindpype object from scratch.
- Parameters:
sess (session object) – Session object where the data source will exist
name (str, default = 'untitled') – Name of the stream. Describes the device (or product series) that this stream makes available.
type (str, default = '') – Content type of the stream. By convention LSL uses the content types defined in the XDF file format specification where applicable.
channel_count (int, default = 1) –
Number of channels per sample. This stays constant for the lifetime of the stream.
nominal_srate (float, default = 0.0) –
The sampling rate (in Hz) as advertised by the data source.
channel_format (int or str, default = 1) –
Format/type of each channel (ie. ‘float32’).
source_id (str, default = '') –
Unique identifier of the device or source of the data, if available (such as the serial number).
This is critical for system robustness since it allows recipients to recover from failure even after the serving app, device or computer crashes (just by finding a stream with the same source id on the network again).
filesave (str, default = None) – If not None, the data will be saved to the given file.
- Returns:
src – Output LSL Stream
- Return type:
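A sketch of creating an LSL outlet for pushing results out of MindPype (Session.create() assumed as above; the stream name, type, and source id are placeholders):

from mindpype.core import Session
from mindpype.source import OutputLSLStream

sess = Session.create()
outlet = OutputLSLStream.create_outlet(
    sess,
    name="mindpype_predictions",   # placeholder stream name
    type="Markers",
    channel_count=1,
    nominal_srate=0.0,             # irregular rate, typical for marker-style output
    source_id="mp-demo-01",        # placeholder source id
)
# Register the outlet as an external output of the session (if the factory
# method has not already done so):
sess.add_ext_out(outlet)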
Kernel Base Class
- class mindpype.kernel.Kernel(name, init_style, graph)[source]
Bases:
MPBase, ABC
An abstract base class for kernels. Only used by developers looking to extend the library.
- Parameters:
name (str) – Name of the kernel
init_style (MPEnums Object) – Kernel initialization style, according to MPEnum class
- name
Name of the kernel
- Type:
str
- init_style
Kernel initialization style, according to the MPEnums class
- Type:
MPEnums Object
- add_initialization_data(init_inputs, init_labels=None)[source]
Add initialization data to the kernel
- Parameters:
init_inputs (list or tuple of data objects) – MindPype container containing the initialization data
init_labels (data object, default = None) – MindPype container containing the initialization labels
- add_phony_init_input(ph_input, index)[source]
Adds a phony initialization input to the kernel
- Parameters:
ph_input (Object) – The input to be added
index (int) – The index at which the input is to be added
- add_phony_init_output(ph_output, index)[source]
Adds a phony initialization output to the kernel
- Parameters:
ph_output (Object) – The output to be added
index (int) – The index at which the output is to be added
- add_phony_input(ph_input, index)[source]
Adds a phony input to the kernel
- Parameters:
ph_input (Object) – The input to be added
index (int) – The index at which the input is to be added
- add_phony_output(ph_output, index)[source]
Adds a phony output to the kernel
- Parameters:
ph_output (Object) – The output to be added
index (int) – The index at which the output is to be added
- get_init_input(index)[source]
Returns the input at the specified index
- Parameters:
index (int) – The index of the input to be returned
- Returns:
_init_inputs[index] – The input at the specified index
- Return type:
Object
- get_init_output(index)[source]
Returns the output at the specified index
- Parameters:
index (int) – The index of the output to be returned
- Returns:
_init_outputs[index] – The output at the specified index
- Return type:
Object
- get_input(index)[source]
Returns the input at the specified index
- Parameters:
index (int) – The index of the input to be returned
- Returns:
_inputs[index] – The input at the specified index
- Return type:
Object
- get_output(index)[source]
Returns the output at the specified index
- Parameters:
index (int) – The index of the output to be returned
- Returns:
_outputs[index] – The output at the specified index
- Return type:
Object
- property init_input_labels
Returns the labels for the initialization inputs
- Returns:
_init_input_labels – The labels for the initialization inputs
- Return type:
list
- property init_inputs
Returns the initialization inputs of the kernel
- Returns:
_init_inputs – The initialization inputs of the kernel
- Return type:
list
- property init_output_labels
Returns the labels for the initialization outputs
- Returns:
_init_output_labels – The labels for the initialization outputs
- Return type:
list
- property init_outputs
Returns the initialization outputs of the kernel
- Returns:
_init_outputs – The initialization outputs of the kernel
- Return type:
list
- property inputs
Returns the inputs of the kernel
- Returns:
_inputs – The inputs of the kernel
- Return type:
list
- is_covariance_input(param)[source]
Returns true if the parameter is a covariance input
- Parameters:
param (data object) – Data object to be checked as a covariance matrix
- Returns:
True if the data object is a covariance matrix, False otherwise
- Return type:
bool
- property outputs
Returns the outputs of the kernel
- Returns:
_outputs – The outputs of the kernel
- Return type:
list
- property phony_init_input_labels
Returns the labels for the phony initialization inputs
- Returns:
_phony_init_input_labels – The labels for the phony initialization inputs
- Return type:
list
- property phony_init_inputs
Returns the phony initialization inputs of the kernel
- Returns:
_phony_init_inputs – The phony initialization inputs of the kernel
- Return type:
list
- property phony_init_output_labels
Returns the labels for the phony initialization outputs
- Returns:
_phony_init_output_labels – The labels for the phony initialization outputs
- Return type:
list
- property phony_init_outputs
Returns the phony initialization outputs of the kernel
- Returns:
_phony_init_outputs – The phony initialization outputs of the kernel
- Return type:
list
- property phony_inputs
Returns the phony inputs of the kernel
- Returns:
_phony_inputs – The phony inputs of the kernel
- Return type:
list
- property phony_outputs
Returns the phony outputs of the kernel
- Returns:
_phony_outputs – The phony outputs of the kernel
- Return type:
list
MindPype Classifier Objects
Use this class to create a classifier object that can be used by the MindPype Classifier kernel.
Note
MindPype Classifier objects must be created in order to be used by the MindPype Classifier kernel.
- class mindpype.classifier.Classifier(sess, ctype, classifier)[source]
Bases:
MPBase
A classifier that can be used by different MindPype kernels
- Parameters:
sess (Session) – Session where the Classifier object will exist
ctype (str) – The name of the classifier to be created
classifier (Classifier) – The classifier object to be used within the node (should be the return from a MindPype kernel)
- ctype
One of [‘lda’, ‘svm’, ‘logistic regression’, ‘custom’], corresponding to the type of classifier
- Type:
str
- classifier
The classifier object (e.g., an sklearn classifier object) that will dictate this node's function
- Type:
Examples
from mindpype import Classifier
# Create a MindPype Classifier object using the factory method
classifier_object = Classifier.create_LDA(sess, solver='svd', shrinkage=None, priors=None, n_components=None, store_covariance=False, tol=0.0001)
- Returns:
MindPype Classifier object – The MindPype Classifier object that can be used by the MindPype Classifier kernel
- Return type:
- classmethod create_LDA(sess, solver='svd', shrinkage=None, priors=None, n_components=None, store_covariance=False, tol=0.0001, covariance_estimator=None)[source]
Factory method to create an LDA MindPype Classifier object.
Note
This is simply a wrapper for the sklearn LDA object.
Note
This method utilizes the LinearDiscriminantAnalysis class from the sklearn package.
- Parameters:
sess (Session) – Session where the LDA MindPype Classifier object will exist
- Returns:
MindPype Classifier – MindPype Classifier Object containing the LDA classifier
- Return type:
Examples
>>> from mindpype import Classifier
>>> classifier_object = Classifier.create_LDA(sess)
- classmethod create_SVM(sess, C=1, kernel='rbf', degree=3, gamma='scale', coef0=0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', break_ties=False, random_state=None)[source]
Factory Method to create an SVM MindPype Classifier object.
Note
This is simply a wrapper for the sklearn SVC object.
This method utilizes the SVC class from the sklearn package.
- Parameters:
sess (session object) – Session where the SVM MindPype Classifier object will exist
Examples
>>> from mindpype import Classifier
>>> classifier_object = Classifier.create_SVM(sess)
- Returns:
MindPype Classifier Object – MindPype Classifier Object containing the SVM classifier
- Return type:
- classmethod create_custom_classifier(sess, classifier_object, classifier_type)[source]
Factory method to create a generic MindPype Classifier object.
- Parameters:
sess (Session) – The MindPype Session object to which the classifier will be added.
classifier_object (Sklearn Classifier object) – The classifier object to be added to the MindPype Session.
classifier_type (str) – The type of classifier to be added to the MindPype Session.
- Returns:
Classifier Object – MindPype Classifier object that contains the classifier object and type.
- Return type:
Examples
>>> from mindpype import Classifier
>>> from sklearn.svm import SVC
>>> svm_object = SVC()
>>> classifier_object = Classifier.create_custom_classifier(sess, svm_object, 'svm')
- classmethod create_logistic_regression(sess, penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='lbfgs', max_iter=100, multi_class='auto', verbose=0, warm_start=False, n_jobs=None, l1_ratio=None)[source]
Factory method to create a logistic regression MindPype Classifier object.
Note
This is simply a wrapper for the sklearn Logistic Regression object.
Note
This method utilizes the LogisticRegression class from the sklearn package.
- Parameters:
sess (session object) – Session where the Logistic Regression MindPype Classifier object will exist
- Returns:
MindPype Classifier Object – MindPype Classifier Object containing the Logistic Regression classifier
- Return type:
Examples
>>> from mindpype import Classifier
>>> classifier_object = Classifier.create_logistic_regression(sess)
- ctypes = ['lda', 'svm', 'logistic regression', 'custom']
MindPype Filter Objects
filter.py - Defines the Filter class for MindPype
- class mindpype.filter.Filter(sess, ftype, btype, implementation, crit_frqs, fs, coeffs)[source]
Bases:
MPBase
A filter that can be used by different MindPype kernels
- ftype
The type of filter. Can be one of ‘butter’, ‘cheby1’, ‘cheby2’, ‘ellip’, ‘bessel’
- Type:
str, default ‘butter’
- btype
The band type of the filter. Can be one of 'lowpass', 'highpass', 'bandpass', 'bandstop'
- Type:
str, default ‘lowpass’
- implementation
The implementation of the filter. Can be one of 'ba', 'zpk', 'sos'
- Type:
str, default ‘ba’
- fs
The sampling frequency of the filter
- Type:
float, default 1.0
- crit_frqs
The critical frequencies of the filter. For lowpass and highpass filters, Wn is a scalar; for bandpass and bandstop filters, Wn is a length-2 sequence.
- Type:
array_like of floats
- coeffs
The filter coefficients. The coefficients depend on the filter type and implementation. See scipy.signal documentation for more details.
- Type:
array_like of floats
- btypes = ['lowpass', 'highpass', 'bandpass', 'bandstop']
- classmethod create_bessel(sess, N, Wn, btype='lowpass', implementation='ba', norm='phase', fs=1.0)[source]
Factory method to create a Bessel MindPype filter object
- Parameters:
N (int) – The order of the filter.
Wn (array_like) – A scalar or length-2 sequence giving the critical frequencies (defined by the norm parameter). For analog filters, Wn is an angular frequency (e.g., rad/s). For digital filters, Wn are in the same units as fs. By default, fs is 2 half-cycles/sample, so these are normalized from 0 to 1, where 1 is the Nyquist frequency. (Wn is thus in half-cycles / sample.)
btype ({'lowpass', 'highpass', 'bandpass', 'bandstop'}, optional) – The type of filter. Default is ‘lowpass’.
analog (bool, optional) – When True, return an analog filter, otherwise a digital filter is returned. (See Notes.)
output ({'ba', 'zpk', 'sos'}, optional) – Type of output: numerator/denominator (‘ba’), pole-zero (‘zpk’), or second-order sections (‘sos’). Default is ‘ba’.
norm ({'phase', 'delay', 'mag'}, optional) –
- Critical frequency normalization:
- phase
The filter is normalized such that the phase response reaches its midpoint at angular (e.g. rad/s) frequency Wn. This happens for both low-pass and high-pass filters, so this is the “phase-matched” case. The magnitude response asymptotes are the same as a Butterworth filter of the same order with a cutoff of Wn. This is the default, and matches MATLAB’s implementation.
- delay
The filter is normalized such that the group delay in the passband is 1/Wn (e.g., seconds). This is the “natural” type obtained by solving Bessel polynomials.
- mag
The filter is normalized such that the gain magnitude is -3 dB at angular frequency Wn.
fs (float, optional) – The sampling frequency of the digital system.
- Returns:
MindPype Filter object – The filter object containing the filter and its parameters
- Return type:
- classmethod create_butter(sess, N, Wn, btype='lowpass', implementation='ba', fs=1.0)[source]
Factory method to create a butterworth MindPype filter object
Butterworth digital and analog filter design.
Design an Nth-order digital or analog Butterworth filter and return the filter coefficients.
- Parameters:
N (int) – The order of the filter.
Wn (array_like) – The critical frequency or frequencies. For lowpass and highpass filters, Wn is a scalar; for bandpass and bandstop filters, Wn is a length-2 sequence. For a Butterworth filter, this is the point at which the gain drops to 1/sqrt(2) that of the passband (the “-3 dB point”). For digital filters, if fs is not specified, Wn units are normalized from 0 to 1, where 1 is the Nyquist frequency (Wn is thus in half cycles / sample and defined as 2*critical frequencies / fs). If fs is specified, Wn is in the same units as fs. For analog filters, Wn is an angular frequency (e.g. rad/s).
btype ({'lowpass', 'highpass', 'bandpass', 'bandstop'}, default: 'lowpass') – The type of filter. Default is 'lowpass'.
output ({'ba', 'zpk', 'sos'}, default: 'ba') – Type of output: numerator/denominator (‘ba’), pole-zero (‘zpk’), or second-order sections (‘sos’). Default is ‘ba’ for backwards compatibility, but ‘sos’ should be used for general-purpose filtering.
fs (float, default: 1.0) – The sampling frequency of the digital system.
- Returns:
MindPype Filter object – The filter object containing the filter and its parameters
- Return type:
- classmethod create_cheby1(sess, N, rp, Wn, btype='lowpass', implementation='ba', fs=1.0)[source]
Factory method to create a Chebyshev Type-I MindPype filter object
- Parameters:
sess (Session) – Session where the filter object will exist
N (int) – The order of the filter.
rp (float) – The maximum ripple allowed below unity gain in the passband. Specified in decibels, as a positive number.
Wn (array_like) – A scalar or length-2 sequence giving the critical frequencies. For Type I filters, this is the point in the transition band at which the gain first drops below -rp. For digital filters, Wn are in the same units as fs. By default, fs is 2 half-cycles/sample, so these are normalized from 0 to 1, where 1 is the Nyquist frequency. (Wn is thus in half-cycles / sample.) For analog filters, Wn is an angular frequency (e.g., rad/s).
btype ({'lowpass', 'highpass', 'bandpass', 'bandstop'}, default: 'lowpass') – The type of filter. Default is 'lowpass'.
output ({'ba', 'zpk', 'sos'}, default: 'ba') – Type of output: numerator/denominator (‘ba’), pole-zero (‘zpk’), or second-order sections (‘sos’). Default is ‘ba’ for backwards compatibility, but ‘sos’ should be used for general-purpose filtering.
fs (float, default: 1.0) – The sampling frequency of the digital system.
- Returns:
MindPype Filter object – The filter object containing the filter and its parameters
- Return type:
- classmethod create_cheby2(sess, N, rs, Wn, btype='lowpass', implementation='ba', fs=1.0)[source]
Factory method to create a Chebyshev Type-II MindPype filter object
- Parameters:
sess (Session) – Session where the filter object will exist
N (int) – The order of the filter.
rs (float) – The minimum attenuation required in the stop band. Specified in decibels, as a positive number.
Wn (array_like) – A scalar or length-2 sequence giving the critical frequencies. For Type II filters, this is the point in the transition band at which the gain first reaches -rs. For digital filters, Wn are in the same units as fs. By default, fs is 2 half-cycles/sample, so these are normalized from 0 to 1, where 1 is the Nyquist frequency. (Wn is thus in half-cycles / sample.) For analog filters, Wn is an angular frequency (e.g., rad/s).
btype ({'lowpass', 'highpass', 'bandpass', 'bandstop'}, default: 'lowpass') – The type of filter. Default is 'lowpass'.
output ({'ba', 'zpk', 'sos'}, default: 'ba') – Type of output: numerator/denominator (‘ba’), pole-zero (‘zpk’), or second-order sections (‘sos’). Default is ‘ba’ for backwards compatibility, but ‘sos’ should be used for general-purpose filtering.
fs (float, default: 1.0) – The sampling frequency of the digital system.
- Returns:
MindPype Filter object – The filter object containing the filter and its parameters
- Return type:
- classmethod create_ellip(sess, N, rp, rs, Wn, btype='lowpass', implementation='ba', fs=1.0)[source]
Factory method to create an Elliptic MindPype filter object
- Parameters:
N (int) – The order of the filter.
rp (float) – The maximum ripple allowed below unity gain in the passband. Specified in decibels, as a positive number.
rs (float) – The minimum attenuation required in the stop band. Specified in decibels, as a positive number.
Wn (array_like) – A scalar or length-2 sequence giving the critical frequencies. For elliptic filters, this is the point in the transition band at which the gain first drops below -rp. For digital filters, Wn are in the same units as fs. By default, fs is 2 half-cycles/sample, so these are normalized from 0 to 1, where 1 is the Nyquist frequency. (Wn is thus in half-cycles / sample.) For analog filters, Wn is an angular frequency (e.g., rad/s).
btype ({'lowpass', 'highpass', 'bandpass', 'bandstop'}, optional) – The type of filter. Default is ‘lowpass’.
analog (bool, optional) – When True, return an analog filter, otherwise a digital filter is returned.
output ({'ba', 'zpk', 'sos'}, optional) – Type of output: numerator/denominator (‘ba’), pole-zero (‘zpk’), or second-order sections (‘sos’). Default is ‘ba’ for backwards compatibility, but ‘sos’ should be used for general-purpose filtering.
fs (float, optional) – The sampling frequency of the digital system.
- Returns:
MindPype Filter object – The filter object containing the filter and its parameters
- Return type:
- classmethod create_fir(sess, fs, low_freq=None, high_freq=None, filter_length='auto', l_trans_bandwidth='auto', h_trans_bandwidth='auto', method='fir', iir_params=None, phase='zero', fir_window='hamming', fir_design='firwin')[source]
Factory method to create a FIR MindPype filter object. Creates a Scipy.signal.firwin object and stores it in the filter object.
Note
The FIR is based on the Scipy firwin class, visit the Scipy documentation for more information on the parameters.
- Parameters:
sess (Session) – The session object to which the filter will be added
The other parameters are the same as those of the MNE create_filter function; see the MNE documentation (https://mne.tools/stable/generated/mne.filter.create_filter.html) for more information on these parameters.
- Returns:
MindPype Filter object – The filter object containing the filter and its parameters
- Return type:
- Raises:
ValueError – If any value in cutoff is less than or equal to 0 or greater than or equal to fs/2, if the values in cutoff are not strictly monotonically increasing.
- ftypes = ['butter', 'cheby1', 'cheby2', 'ellip', 'bessel', 'fir']
- implementations = ['ba', 'zpk', 'sos', 'fir']
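A sketch of creating a band-pass filter object for use by downstream filtering kernels (Session.create() assumed as above; the order, band edges, and sampling rate are illustrative):

from mindpype.core import Session
from mindpype.filter import Filter

sess = Session.create()
# 4th-order Butterworth band-pass between 8 and 30 Hz, designed as second-order
# sections (sos) for a 250 Hz digital system
bp_filter = Filter.create_butter(sess, N=4, Wn=[8, 30], btype='bandpass',
                                 implementation='sos', fs=250)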