List of available kernels
Arithmetic Operations
- class mindpype.kernels.arithmetic.AbsoluteKernel(graph, inA, outA)[source]
Bases: Unary, Kernel
Kernel to calculate the element-wise absolute value of one MindPype data container (i.e. tensor or scalar)
Note
This kernel utilizes the numpy function absolute.
- Parameters:
See also
Kernel
Base class for all kernels
Unary
Base class for all unary arithmetic operator kernels
- classmethod add_to_graph(graph, inA, outA, init_input=None, init_labels=None)[source]
Factory method to create an absolute value kernel node and add it to a graph.
- Parameters:
- Returns:
node – Node object containing the absolute kernel and parameters
- Return type:
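A minimal usage sketch for the absolute value kernel follows. The add_to_graph signature is taken from this reference; the Session, Graph, and Tensor factory calls are assumptions and may differ in your MindPype version.
```python
import mindpype as mp
from mindpype.kernels.arithmetic import AbsoluteKernel

# Assumed setup helpers; only add_to_graph below is documented here.
sess = mp.Session.create()
graph = mp.Graph.create(sess)
in_t = mp.Tensor.create(sess, (8, 256))    # e.g. channels x samples
out_t = mp.Tensor.create(sess, (8, 256))   # same shape as the input

# Documented factory method: creates the kernel, wraps it in a Node, and
# adds the node to the graph. After the graph is verified, initialized,
# and executed, out_t mirrors numpy.absolute applied to in_t.
node = AbsoluteKernel.add_to_graph(graph, in_t, out_t)
```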
- class mindpype.kernels.arithmetic.AdditionKernel(graph, inA, inB, outA)[source]
Bases: Binary, Kernel
Kernel to sum two MindPype data containers together
- Parameters:
See also
Kernel
Base class for all kernels
Binary
Base class for all binary arithmetic operator kernels
add_addition_node
Factory method to create an addition kernel node and add it to a graph
- classmethod add_to_graph(graph, inA, inB, outA, init_inputs=None, init_labels=None)[source]
Factory method to create an addition kernel node and add it to a graph
- Parameters:
graph (Graph) – Graph that the kernel should be added to
init_inputs (List of two Tensors or Scalars, default=None) – MindPype data containers with initialization data to be transformed and passed to downstream nodes during graph initialization
init_labels (Tensor or Array, default=None) – MindPype data container with initialization labels to be passed to downstream nodes during graph initialization
See also
AdditionKernel
Kernel to sum two MindPype data containers together
- class mindpype.kernels.arithmetic.DivisionKernel(graph, inA, inB, outA)[source]
Bases: Binary, Kernel
Kernel to compute the quotient of two MindPype data containers
- Parameters:
See also
Kernel
Base class for all kernels
Binary
Base class for all binary arithmetic operator kernels
add_division_node
Factory method to create a division kernel node and add it to a graph
- classmethod add_to_graph(graph, inA, inB, outA, init_inputs=None, init_labels=None)[source]
Factory method to create a division kernel node and add it to a graph.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
init_inputs (List of two Tensors or Scalars, default=None) – MindPype data containers with initialization data to be transformed and passed to downstream nodes during graph initialization
init_labels (Tensor or Array, default=None) – MindPype data container with initialization labels to be passed to downstream nodes during graph initialization
See also
DivisionKernel
Kernel to compute the quotient of two MindPype data containers
- class mindpype.kernels.arithmetic.LogKernel(graph, inA, outA)[source]
Bases: Unary, Kernel
Kernel to perform element-wise natural logarithm operation on one MindPype data container (i.e. tensor or scalar)
Note
This kernel utilizes the numpy function log.
- Parameters:
- classmethod add_to_graph(graph, inA, outA, init_input=None, init_labels=None)[source]
Factory method to create a log kernel node and add it to a graph.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
init_input (Tensor or Scalar, default=None) – MindPype data container with initialization data to be transformed and passed to downstream nodes during graph initialization
init_labels (Tensor or Array, default=None) – MindPype data container with initialization labels to be passed to downstream nodes during graph initialization
- Returns:
node – Node object containing the log kernel and parameters
- Return type:
- class mindpype.kernels.arithmetic.MultiplicationKernel(graph, inA, inB, outA)[source]
Bases: Binary, Kernel
Kernel to compute the product of two MindPype data containers
Note
This is an element-wise multiplication operation
- Parameters:
See also
Kernel
Base class for all kernels
Binary
Base class for all binary arithmetic operator kernels
add_multiplication_node
Factory method to create a multiplication kernel node and add it to a graph
- classmethod add_to_graph(graph, inA, inB, outA, init_inputs=None, init_labels=None)[source]
Factory method to create a multiplication kernel node and add it to a graph
- Parameters:
graph (Graph) – Graph that the kernel should be added to
init_inputs (List of two data containers, default=None) – MindPype data containers with initialization data to be transformed and passed to downstream nodes during graph initialization
init_labels (Tensor or Array, default=None) – MindPype data container with initialization labels to be passed to downstream nodes during graph initialization
- Returns:
node – Node object that has kernel and parameter stored in it
- Return type:
- class mindpype.kernels.arithmetic.SubtractionKernel(graph, inA, inB, outA)[source]
Bases: Binary, Kernel
Kernel to calculate the difference between two MindPype data containers
- Parameters:
See also
Kernel
Base class for all kernels
Binary
Base class for all binary arithmetic operator kernels
add_subtraction_node
Factory method to create a subtraction kernel node and add it to a graph
- classmethod add_to_graph(graph, inA, inB, outA, init_inputs=None, init_labels=None)[source]
Factory method to create a subtraction kernel node and add it to a graph
- Parameters:
graph (Graph) – Graph that the kernel should be added to
init_inputs (List of two data containers, default=None) – MindPype data containers with initialization data to be transformed and passed to downstream nodes during graph initialization
init_labels (Tensor or Array, default=None) – MindPype data container with initialization labels to be passed to downstream nodes during graph initialization
- Returns:
node – Node object that has kernel and parameter stored in it
- Return type:
See also
SubtractionKernel
Kernel to calculate the difference between two MindPype data containers
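The binary arithmetic kernels can be chained through shared data containers. The sketch below wires an addition node into a division node; the add_to_graph signatures are documented above, while the Session, Graph, Tensor, and Scalar factory helpers are assumed.
```python
import mindpype as mp
from mindpype.kernels.arithmetic import AdditionKernel, DivisionKernel

# Assumed setup helpers; only the add_to_graph calls are documented above.
sess = mp.Session.create()
graph = mp.Graph.create(sess)
a = mp.Tensor.create(sess, (4, 4))
b = mp.Tensor.create(sess, (4, 4))
summed = mp.Tensor.create(sess, (4, 4))
halved = mp.Tensor.create(sess, (4, 4))
two = mp.Scalar.create_from_value(sess, 2.0)   # assumed Scalar factory

# (a + b) feeds the division node through the shared `summed` container;
# the graph schedules the nodes from these data dependencies.
AdditionKernel.add_to_graph(graph, a, b, summed)
DivisionKernel.add_to_graph(graph, summed, two, halved)
```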
Baseline Correction
- class mindpype.kernels.baseline_correction.BaselineCorrectionKernel(graph, inA, outA, axis=-1, baseline_period=(0, -1))[source]
Bases:
Kernel
Kernel to perform baseline correction. Input data is baseline corrected by subtracting the mean of the baseline period from the input data along a specified axis.
Note
This kernel utilizes the numpy function mean.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor object) – Input data container
outA (Tensor object) – Output data container
axis (int, default = -1) – Axis along which to perform baseline correction
baseline_period (array-like (start, end)) – Baseline period where start and end are the start and end indices of the baseline period within the target axis.
- classmethod add_to_graph(graph, inputA, outputA, baseline_period=(0, -1), axis=-1, init_input=None, init_labels=None)[source]
Factory method to create a baseline correction kernel and add it to a graph
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inputA (Tensor) – Input data container
outputA (Tensor) – Output data container
baseline_period (array-like (start, end)) – Baseline period where start and end are the start and end indices of the baseline period within the target axis.
axis (int, default = -1) – Axis along which to perform baseline correction
init_input (Tensor or Scalar data container, default=None) – MindPype data container with initialization data to be transformed and passed to downstream nodes during graph initialization
init_labels (Tensor or Array data container, default=None) – MindPype data container with initialization labels to be passed to downstream nodes during graph initialization
- Returns:
node – Node object containing the baseline correction kernel and parameters
- Return type:
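A usage sketch, assuming a (channels x samples) epoch whose first 100 samples form the pre-stimulus baseline (container factory helpers assumed; only add_to_graph is documented here):
```python
import mindpype as mp
from mindpype.kernels.baseline_correction import BaselineCorrectionKernel

sess = mp.Session.create()                    # assumed setup helpers
graph = mp.Graph.create(sess)
epoch = mp.Tensor.create(sess, (16, 500))
corrected = mp.Tensor.create(sess, (16, 500))

# Subtract the mean of samples 0..99 (along the last axis) from every
# sample in the epoch.
BaselineCorrectionKernel.add_to_graph(graph, epoch, corrected,
                                      baseline_period=(0, 100), axis=-1)
```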
Classifier
- class mindpype.kernels.classifier.ClassifierKernel(graph, inA, classifier, prediction, output_probs, num_classes, initialization_data=None, labels=None)[source]
Bases:
Kernel
Kernel to classify/predict labels for input data using a MindPype Classifier object
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
classifier (Classifier) – MindPype Classifier object to be used for classification
prediction (Tensor or Scalar) – Predicted labels of the classifier
output_probs (Tensor, default None) – If not None, the output will be the probability of each class.
initialization_data (Tensor or Array, default None) – Initialization data to train the classifier. If None, training data will be supplied by upstream nodes in the graph during graph initialization.
labels (Tensor or Array, default None) – Class labels for classifier training. If None, training labels will be supplied by upstream nodes in the graph during graph initialization.
- classmethod add_to_graph(graph, inA, classifier, outA, outB=None, num_classes=2, initialization_data=None, labels=None)[source]
Factory method to create a classifier kernel and add it to the graph.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
classifier (Classifier) – MindPype Classifier object to be used for classification
outA (Tensor or Scalar) – Predicted labels of the classifier
outB (Tensor, default None) – The probability of each class. If None, probability output will not be computed.
initialization_data (Tensor or Array, default None) – Initialization data to train the classifier. If None, training data will be supplied by upstream nodes in the graph during graph initialization.
labels (Tensor or Array, default None) – Class labels for classifier training. If None, training labels will be supplied by upstream nodes in the graph during graph initialization.
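A usage sketch, leaving the training data and labels to be supplied by upstream nodes during graph initialization. The Classifier, Scalar, and other factory helpers below are assumptions; only the add_to_graph signature comes from this reference.
```python
import mindpype as mp
from mindpype.kernels.classifier import ClassifierKernel

sess = mp.Session.create()                     # assumed setup helpers
graph = mp.Graph.create(sess)
features = mp.Tensor.create(sess, (1, 8))      # one trial's feature vector
label = mp.Scalar.create(sess, int)            # predicted class label
clsf = mp.Classifier.create_LDA(sess)          # assumed Classifier factory

# Training data/labels are None, so they must come from upstream nodes
# during graph initialization.
ClassifierKernel.add_to_graph(graph, features, clsf, label,
                              outB=None, num_classes=2)
```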
Common Spatial Pattern (CSP)
- class mindpype.kernels.csp.CommonSpatialPatternKernel(graph, inA, outA, n_components=4, cov_est='concat', reg=None, init_data=None, labels=None)[source]
Bases:
Kernel
Kernel to apply common spatial pattern (CSP) filters to trial data. CSP works by finding spatial filters that maximize the variance for one condition while minimizing it for the other, making it easier to distinguish between different mental states.
Note
This kernel utilizes the mne class CSP.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – First input trial data
outA (Tensor) – Output trial data
n_components (int, default=4) – Number of components to decompose the input signals. See CSP for more information.
cov_est (str, default='concat') – Method to estimate the covariance matrix. Options are ‘concat’ or ‘epoch’. See CSP for more information.
reg (float, default=None) – Regularization parameter for covariance matrix estimation. See CSP for more information.
init_data (Tensor or Array, default=None) – Initialization data to configure the filters (n_trials, n_channels, n_samples)
labels (Tensor or Array, default=None) – Labels corresponding to initialization data class labels (n_trials,)
See also
Kernel
Base class for all kernel objects
- classmethod add_to_graph(graph, inA, outA, initialization_data=None, labels=None, n_components=4, cov_est='concat', reg=None)[source]
Factory method to create a CSP filter kernel and add it as a node to a graph
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input trial data
outA (Tensor) – Filtered trial data
initialization_data (Tensor or Array, default=None) – Initialization data to configure the filters (n_trials, n_channels, n_samples)
labels (Tensor or Array, default=None) – Labels corresponding to initialization data class labels (n_trials,)
n_components (int, default=4) – Number of components to decompose the input signals. See CSP for more information.
cov_est (str, default='concat') – Method to estimate the covariance matrix. Options are ‘concat’ or ‘epoch’. See CSP for more information.
reg (float, default=None) – Regularization parameter for covariance matrix estimation. See CSP for more information.
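A usage sketch, fitting the spatial filters from labeled initialization data during graph initialization (shapes follow the documented (n_trials, n_channels, n_samples) convention; setup helpers and the filtered output shape are assumptions):
```python
import mindpype as mp
from mindpype.kernels.csp import CommonSpatialPatternKernel

sess = mp.Session.create()                      # assumed setup helpers
graph = mp.Graph.create(sess)
trial = mp.Tensor.create(sess, (16, 500))       # one trial: channels x samples
filtered = mp.Tensor.create(sess, (4, 500))     # 4 CSP components retained
train_data = mp.Tensor.create(sess, (40, 16, 500))
train_labels = mp.Tensor.create(sess, (40,))

# Filters are fit from the initialization data when the graph is
# initialized, then applied to each new trial at execution time.
CommonSpatialPatternKernel.add_to_graph(graph, trial, filtered,
                                        initialization_data=train_data,
                                        labels=train_labels,
                                        n_components=4)
```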
Data Management/Manipulation
- class mindpype.kernels.datamgmt.ConcatenationKernel(graph, outA, inA, inB, axis=0)[source]
Bases:
Kernel
Kernel to concatenate multiple tensors into a single tensor
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input 1 data
inB (Tensor) – Input 2 data
outA (Tensor) – Output data
axis (int or tuple of ints, default = 0) – The axis along which the arrays will be joined. If axis is None, arrays are flattened before use. Default is 0. See numpy.concatenate for more information
- classmethod add_to_graph(graph, inA, inB, outA, axis=0, init_inputs=None, init_labels=None)[source]
Factory method to create a concatenation kernel and add it to a graph as a generic node object.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input 1 data
inB (Tensor) – Input 2 data
outA (Tensor) – Output data
axis (int or tuple of ints, default = 0) – The axis along which the arrays will be joined. If axis is None, arrays are flattened before use. Default is 0. See numpy.concatenate for more information
- Returns:
node – The node object that was added to the graph containing the concatenation kernel
- Return type:
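A usage sketch joining two segments along the sample axis, mirroring numpy.concatenate semantics (setup helpers assumed):
```python
import mindpype as mp
from mindpype.kernels.datamgmt import ConcatenationKernel

sess = mp.Session.create()                  # assumed setup helpers
graph = mp.Graph.create(sess)
seg_a = mp.Tensor.create(sess, (8, 250))
seg_b = mp.Tensor.create(sess, (8, 250))
joined = mp.Tensor.create(sess, (8, 500))   # 250 + 250 samples

ConcatenationKernel.add_to_graph(graph, seg_a, seg_b, joined, axis=1)
```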
- class mindpype.kernels.datamgmt.EnqueueKernel(graph, inA, queue, enqueue_flag)[source]
Bases:
Kernel
Kernel to enqueue a MindPype object into a MindPype circle buffer
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (MPBase) – Input data to enqueue into circle buffer
queue (CircleBuffer) – Circle buffer to have data enqueued to
- classmethod add_to_graph(graph, inA, queue, enqueue_flag=None)[source]
Factory method to create an enqueue kernel and add it to a graph as a generic node object.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor or Scalar or Array or CircleBuffer) – Input data to enqueue into circle buffer
queue (CircleBuffer) – Circle buffer to have data enqueued to
enqueue_flag (bool, optional) – Scalar boolean used to determine whether the input should be added to the queue
- Returns:
node – The node object that was added to the graph containing the enqueue kernel
- Return type:
- class mindpype.kernels.datamgmt.ExtractKernel(graph, inA, indices, outA, reduce_dims)[source]
Bases:
Kernel
Kernel to extract a portion of a tensor or array
- Parameters:
- classmethod add_to_graph(graph, inA, indices, outA, reduce_dims=False, init_input=None, init_labels=None)[source]
Factory method to create an extract kernel and add it to a graph as a generic node object.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
indices (list of slices or list of ints) – Indices within inA from which to extract data
reduce_dims (bool, default = False) – Remove singleton dimensions if true, don’t squeeze otherwise
- Returns:
node – The node object that was added to the graph containing the extract kernel
- Return type:
- class mindpype.kernels.datamgmt.ReshapeKernel(graph, inA, outA, shape)[source]
Bases:
Kernel
Kernel to reshape a tensor
- Parameters:
- classmethod add_to_graph(graph, inA, outA, shape, init_inputs=None, init_labels=None)[source]
Factory method to create a reshape kernel and add it to a graph as a generic node object.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input tensor
outA (Tensor) – Output tensor
shape (tuple of ints) – Shape of the output tensor
init_inputs (Tensor) – (optional) Initialization data for the graph
init_labels (Tensor) – (optional) Initialization labels for the graph
- Returns:
node – The node object that was added to the graph containing the reshape kernel
- Return type:
- class mindpype.kernels.datamgmt.StackKernel(graph, inA, outA, axis=None)[source]
Bases:
Kernel
Kernel to stack multiple tensors into a single tensor
- Parameters:
- class mindpype.kernels.datamgmt.TensorStackKernel(graph, inA, inB, outA, axis=None)[source]
Bases:
Kernel
Kernel to stack 2 tensors into a single tensor
- Parameters:
- classmethod add_to_graph(graph, inA, inB, outA, axis=0, init_inputs=None, init_labels=None)[source]
Factory method to create a tensor stack kernel and add it to a graph as a generic node object.
- Parameters:
- Returns:
node – The node object that was added to the graph containing the tensor stack kernel
- Return type:
Epoch
- class mindpype.kernels.epoch.EpochKernel(graph, inA, outA, epoch_length, epoch_stride=None, axis=-1)[source]
Bases:
Kernel
Epochs a continuous signal into a series of smaller segments
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
epoch_length (int) – Length of each epoch in samples
epoch_stride (int, default=None) – Number of samples between consecutive epochs. If None, defaults to epoch_length
axis (int, default=-1) – Axis along which to epoch the data
- classmethod add_to_graph(graph, inA, outA, epoch_len, epoch_stride=None, axis=-1, init_input=None, labels=None)[source]
Factory method to create an epoch kernel and add it to a graph as a generic node object.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
epoch_len (int) – Length of each epoch in samples
epoch_stride (int, default=None) – Number of samples between consecutive epochs. If None, defaults to epoch_length
axis (int, default=-1) – Axis along which to epoch the data
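A usage sketch splitting a 10 s continuous segment sampled at 250 Hz into 1 s epochs with 50% overlap (setup helpers and the output layout are assumptions):
```python
import mindpype as mp
from mindpype.kernels.epoch import EpochKernel

sess = mp.Session.create()                       # assumed setup helpers
graph = mp.Graph.create(sess)
continuous = mp.Tensor.create(sess, (8, 2500))   # channels x samples
epochs = mp.Tensor.create(sess, (19, 8, 250))    # assumed output layout

# 250-sample epochs every 125 samples: (2500 - 250) / 125 + 1 = 19 epochs.
EpochKernel.add_to_graph(graph, continuous, epochs,
                         epoch_len=250, epoch_stride=125, axis=-1)
```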
Feature Normalization
- class mindpype.kernels.feature_normalization.FeatureNormalizationKernel(graph, inA, outA, method, axis=0, initialization_data=None, labels=None)[source]
Bases:
Kernel
Kernel to normalize the values within a feature vector using the method specified by the method parameter.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
method ({'min-max', 'mean-norm', 'zscore-norm'}) – Feature normalization method
axis (int, default = 0) – Axis along which to apply the filter
initialization_data (Tensor) – Initialization data to train the classifier (n_trials, n_channels, n_samples)
labels (Tensor) – Labels corresponding to initialization data class labels (n_trials, )
- classmethod add_to_graph(graph, inA, outA, method='zscore-norm', axis=0, init_data=None, labels=None)[source]
Factory method to create a feature normalization kernel
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
method ({'min-max', 'mean-norm', 'zscore-norm'}) – Feature normalization method
axis (int, default = 0) – Axis along which to apply the filter
init_data (Tensor, default = None) – Initialization data
labels (Tensor, default = None) – Initialization labels
- Returns:
node – Node object that contains the kernel
- Return type:
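A usage sketch z-scoring a feature vector with statistics learned from initialization data (setup helpers assumed):
```python
import mindpype as mp
from mindpype.kernels.feature_normalization import FeatureNormalizationKernel

sess = mp.Session.create()                    # assumed setup helpers
graph = mp.Graph.create(sess)
feats = mp.Tensor.create(sess, (1, 32))
normed = mp.Tensor.create(sess, (1, 32))
train_feats = mp.Tensor.create(sess, (100, 32))

# Normalization statistics are computed from init_data along axis 0
# during graph initialization, then applied to each new feature vector.
FeatureNormalizationKernel.add_to_graph(graph, feats, normed,
                                        method='zscore-norm', axis=0,
                                        init_data=train_feats)
```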
Feature Selection
- class mindpype.kernels.feature_selection.FeatureSelectionKernel(graph, inA, outA, k=10, initialization_data=None, labels=None)[source]
Bases:
Kernel
Performs feature selection using the f_classif method from sklearn.feature_selection to determine the most relevant features in the data.
Note
This kernel utilizes the SelectKBest class from the sklearn package.
- Parameters:
Filters
- class mindpype.kernels.filters.FiltFiltKernel(graph, inA, filt, outA, axis)[source]
Zero phase filter a tensor along the first non-singleton dimension
- Parameters:
- class mindpype.kernels.filters.FilterKernel(graph, inA, filt, outA, axis)[source]
Filter a tensor along the first non-singleton dimension
- Parameters:
Kernel Utilities
- mindpype.kernels.kernel_utils.extract_init_inputs(init_in)[source]
Extracts the initialization parameters from a potentially nested data structure
- Parameters:
init_in (Object) – The input to be extracted from
- Returns:
init_input_data – The initialization inputs as a numpy array
- Return type:
np array
Logical Operations
- class mindpype.kernels.logical.AndKernel(graph, inA, inB, outA)[source]
Kernel to perform logical AND operation elementwise on two MindPype data containers (i.e. tensor or scalar)
Numpy broadcasting rules apply.
Note
This kernel utilizes the numpy function logical_and.
- Parameters:
- class mindpype.kernels.logical.Binary[source]
Bases:
object
Base class for binary logical operator kernels.
- class mindpype.kernels.logical.EqualKernel(graph, inA, inB, outA)[source]
Kernel to perform equal to logical operation elementwise on two MindPype data containers (i.e. tensor or scalar)
Numpy broadcasting rules apply.
- Parameters:
- class mindpype.kernels.logical.GreaterKernel(graph, inA, inB, outA)[source]
Kernel to perform greater than logical operation elementwise on two MindPype data containers (i.e. tensor or scalar)
Numpy broadcasting rules apply.
- Parameters:
- class mindpype.kernels.logical.LessKernel(graph, inA, inB, outA)[source]
Kernel to perform less than logical operation elementwise on two MindPype data containers (i.e. tensor or scalar)
Numpy broadcasting rules apply.
- Parameters:
- class mindpype.kernels.logical.NotKernel(graph, inA, outA)[source]
Kernel to perform logical NOT operation elementwise on one MindPype data container (i.e. tensor or scalar)
Numpy broadcasting rules apply.
Note
This kernel utilizes the numpy function logical_not.
- Parameters:
- class mindpype.kernels.logical.OrKernel(graph, inA, inB, outA)[source]
Kernel to perform logical OR operation elementwise on two MindPype data containers (i.e. tensor or scalar)
Numpy broadcasting rules apply.
Note
This kernel utilizes the numpy function logical_or.
- Parameters:
- class mindpype.kernels.logical.XorKernel(graph, inA, inB, outA)[source]
Kernel to perform logical XOR operation elementwise on two MindPype data containers (i.e. tensor or scalar)
Numpy broadcasting rules apply.
Note
This kernel utilizes the numpy function logical_xor.
- Parameters:
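The logical kernels wrap numpy's elementwise logical functions, so numpy broadcasting governs how differently shaped inputs combine. The plain numpy snippet below illustrates that behavior (it does not use the MindPype API):
```python
import numpy as np

# A (3, 4) boolean tensor combined with a length-4 vector: the vector is
# broadcast across the rows, exactly as in the logical kernels.
a = np.array([[0.2, 0.8, 0.5, 0.9],
              [0.1, 0.4, 0.7, 0.3],
              [0.6, 0.2, 0.9, 0.5]]) > 0.5
b = np.array([True, False, True, True])

print(np.logical_and(a, b))   # elementwise AND, shape (3, 4)
print(np.logical_or(a, b))    # elementwise OR, shape (3, 4)
print(np.logical_not(a))      # unary NOT, shape (3, 4)
```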
Pad
- class mindpype.kernels.pad.PadKernel(graph, inA, outA, pad_width=None, mode='constant', stat_length=None, constant_values=0, end_values=0, reflect_type='even', **kwargs)[source]
Bases:
Kernel
Kernel to conduct padding on data
Note
This kernel utilizes the numpy function pad.
- Parameters:
- classmethod add_to_graph(graph, inA, outA, pad_width=None, mode='constant', stat_length=None, constant_values=0, end_values=0, reflect_type='even', init_input=None, init_labels=None, **kwargs)[source]
Add a pad kernel to the graph
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data (n_channels, n_samples) or (n_trials, n_channels, n_samples)
outA (Tensor) – Output data (n_channels, n_samples) or (n_trials, n_channels, n_samples)
pad_width (int or sequence of ints, optional) – Number of values padded to the edges of each axis. See numpy.pad.
mode (str or function, optional) – String values or a user supplied function. See numpy.pad.
stat_length (sequence or int, optional) – Number of values at the edge of each axis used to calculate the statistic value. See numpy.pad.
constant_values (sequence or int, optional) – The values to set the padded values for each axis. See numpy.pad.
end_values (sequence or int, optional) – The values used for the ending value of the linear_ramp and that will form the edge of the padded array. See numpy.pad.
reflect_type (str, optional) – See numpy.pad.
kwargs (dict, optional) – Keyword arguments for other modes. See numpy.pad.
- Returns:
node – Node that was added to the graph containing the kernel and parameters
- Return type:
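A usage sketch padding 25 zero-valued samples onto both ends of the sample axis only (setup helpers assumed; pad_width follows numpy.pad's per-axis convention):
```python
import mindpype as mp
from mindpype.kernels.pad import PadKernel

sess = mp.Session.create()                  # assumed setup helpers
graph = mp.Graph.create(sess)
raw = mp.Tensor.create(sess, (8, 500))
padded = mp.Tensor.create(sess, (8, 550))   # 25 samples added on each end

# No padding on the channel axis, 25 constant (zero) samples per side on
# the sample axis, as with numpy.pad.
PadKernel.add_to_graph(graph, raw, padded,
                       pad_width=((0, 0), (25, 25)),
                       mode='constant', constant_values=0)
```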
Reduced Sum
- class mindpype.kernels.reduced_sum.ReducedSumKernel(graph, inA, outA, axis=None, keep_dims=False)[source]
Bases:
Kernel
Kernel to compute the sum of the input tensor's elements along the provided axis
- Parameters:
- classmethod add_to_graph(graph, inA, outA, axis=None, keep_dims=False, init_input=None, init_labels=None)[source]
Factory method to create a reduced sum kernel and add it to a graph as a generic node object.
- Parameters:
- Returns:
node – Node object that contains the kernel and its parameters
- Return type:
Resampling
- class mindpype.kernels.resample.ResampleKernel(graph, inA, factor, outA, axis=1)[source]
Bases:
Kernel
Kernel to resample timeseries data
- Parameters:
Riemann Distance
- class mindpype.kernels.riemann_distance.RiemannDistanceKernel(graph, inA, inB, outA)[source]
Bases:
Kernel
Kernel computes pairwise distances between 2D tensors according to the Riemannian metric
Note
This kernel utilizes the pyriemann function distance_riemann.
- Parameters:
Riemann MDM Classifier
- class mindpype.kernels.riemann_mdm_classifier_kernel.RiemannMDMClassifierKernel(graph, inA, outA, num_classes, initialization_data, labels)[source]
Bases:
Kernel
Riemannian Minimum Distance to the Mean Classifier. Kernel takes Tensor input and produces scalar label representing the predicted class. Review classmethods for specific input parameters
Note
This kernel utilizes the pyriemann class MDM.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
initialization_data (Tensor) – Initialization data to train the classifier (n_trials, n_channels, n_samples)
labels (Tensor) – Labels corresponding to initialization data class labels: (n_trials,), or (n_trials, 2) for class-separated data where column 1 is the trial label and column 2 is the start index
- classmethod add_to_graph(graph, inA, outA, num_classes=2, initialization_data=None, labels=None)[source]
Factory method to create an untrained Riemannian minimum distance to the mean classifier kernel and add it to a graph as a generic node object.
Note that the node will have to be initialized (i.e. trained) prior to execution of the kernel.
- Parameters:
- Returns:
node – Node object that contains the kernel
- Return type:
Riemann Mean
- class mindpype.kernels.riemann_mean.RiemannMeanKernel(graph, inA, outA, weights)[source]
Bases:
Kernel
Calculates the Riemann mean of covariances contained in a tensor. Kernel takes 3D Tensor input and produces 2D Tensor representing mean
Note
This kernel utilizes the pyriemann function mean_riemann.
- Parameters:
Riemann Potato
- class mindpype.kernels.riemann_potato.RiemannPotatoKernel(graph, inA, outA, thresh, max_iter, regulization, initialization_data=None)[source]
Bases:
Kernel
Kernel performs Riemannian potato artifact detection. The Riemann Potato method leverages Riemannian geometry to identify and remove artifacts by comparing covariance matrices of EEG signals to a reference matrix of clean signals. Kernel takes Tensor input (which should be covariance matrices) and produces scalar label representing the predicted class
Note
This kernel utilizes the Potato class from the pyriemann package.
- Parameters:
- classmethod add_to_graph(graph, inA, outA, initialization_data=None, thresh=3, max_iter=100, regularization=0.01)[source]
Factory method to create a riemann potato artifact detector
- Parameters:
graph (Graph) – Graph that the kernel should be added to
initialization_data (Tensor or Array) – Data used to initialize the model
thresh (float, default = 3) – Threshold for the potato filter
max_iter (int, default = 100) – Maximum number of iterations for the potato filter
regularization (float, default = 0.01) – Regularization parameter for the potato filter
Running Average
- class mindpype.kernels.running_average.RunningAverageKernel(graph, inA, outA, running_average_len, axis=0, flush_on_init=False)[source]
Bases:
Kernel
Kernel to calculate running average across multiple trials in a session. Trials are automatically included into the next running average calculation.
- Parameters:
inA (Tensor or Scalar) – Single Trial input data to the RunningAverageKernel; should be a 2D Tensor or Scalar object
outA (Tensor or Scalar) – Output Tensor to store output of mean trial calculation; should be the same size of the input tensor or a scalar.
running_average_len (int) – Indicates the maximum number of trials that the running average kernel will be used to compute. Used to preallocate tensor to store previous trial data
axis (None or 0) – Axis along which to calculate the running average. Currently only supports the mean across trials when axis = 0 (i.e. averaging tensor layer values), or a single-value mean when axis = None
flush_on_init (bool, default = False) – If true, flushes the buffer on initialization.
- classmethod add_to_graph(graph, inA, outA, running_average_len, axis=0, flush_on_init=False, init_input=None, init_labels=None)[source]
Factory method to create running average node and add it to the specified graph
- Parameters:
graph (Graph) – The graph where the node object should be added
inA (Tensor or Scalar) – Single Trial input data to the RunningAverageKernel; should be a 2D Tensor or Scalar object
outA (Tensor or Scalar) – Output Tensor to store output of mean trial calculation; should be the same size of the input tensor or a scalar.
running_average_len (int) – Indicates the maximum number of trials that the running average kernel will be used to compute. Used to preallocate tensor to store previous trial data
axis (None or 0) – Axis along which to calculate the running average. Currently only supports the mean across trials when axis = 0 (i.e. averaging tensor layer values), or a single-value mean when axis = None
flush_on_init (bool) – If true, flushes the buffer on initialization.
- Returns:
node – The node object that was added to the graph containing the running average kernel
- Return type:
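A usage sketch averaging each incoming trial with up to the nine preceding trials (setup helpers assumed):
```python
import mindpype as mp
from mindpype.kernels.running_average import RunningAverageKernel

sess = mp.Session.create()                # assumed setup helpers
graph = mp.Graph.create(sess)
trial = mp.Tensor.create(sess, (8, 250))  # single-trial input
avg = mp.Tensor.create(sess, (8, 250))    # running mean across trials

# axis=0 averages across trials; running_average_len caps the number of
# past trials kept in the internal buffer.
RunningAverageKernel.add_to_graph(graph, trial, avg,
                                  running_average_len=10, axis=0)
```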
Slope
- class mindpype.kernels.slope.SlopeKernel(graph, inA, outA, Fs=1, axis=-1)[source]
Bases:
Kernel
Estimates the slope of a time series
- Parameters:
- classmethod add_to_graph(graph, inA, outA, Fs=1, axis=-1, init_input=None, init_labels=None)[source]
Factory method to create a slope estimation kernel
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
Fs (int) – Sampling frequency of the input data
axis (int) – Axis along which to compute the slope
init_input (Tensor or Array, default = None) – Initialization data for the graph
init_labels (Tensor or Array, default = None) – Initialization labels for the graph
Statistical Operations
- class mindpype.kernels.statistics.CDFKernel(graph, inA, outA, dist, df, loc, scale)[source]
Bases:
Kernel
Calculates the CDF for a distribution given a RV as input. Currently supports normal and chi2 distributions
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
dist (str, {'norm', 'chi2'}) – Distribution type
df (shape_like) – The shape parameter(s) for the distribution. See scipy.stats.chi2 docstring for more detailed information
loc (array_like, default = 0) – Location Parameter
scale (array_like, default = 1) – Scale Parameter
- classmethod add_to_graph(graph, inA, outA, dist='norm', df=None, loc=0, scale=1, init_input=None, init_labels=None)[source]
Factory method to create a CDF node
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
dist (str, {'norm', 'chi2'}) – Distribution type
df (shape_like) – The shape parameter(s) for the distribution. See scipy.stats.chi2 docstring for more detailed information
loc (array_like, default = 0) – Location Parameter
scale (array_like, default = 1) – Scale Parameter
init_input (Tensor, default = None) – Initialization data for the graph
init_labels (Tensor, default = None) – Initialization labels for the graph
- class mindpype.kernels.statistics.CovarianceKernel(graph, inputA, outputA, regularization)[source]
Bases:
Kernel
Kernel to compute the covariance of tensors. If the input tensor is unidimensional, will compute the variance. For higher rank tensors, highest order dimension will be treated as variables and the second highest order dimension will be treated as observations.
- Parameters:
- Tensor size examples:
Input: A (kxmxn) Output: B (kxnxn)
Input: A (m) Output: B (1)
Input: A (mxn) Output: B (nxn)
Input: A (hxkxmxn) Output: B (hxkxnxn)
- classmethod add_to_graph(graph, inputA, outputA, regularization=0, init_input=None, init_labels=None)[source]
Factory method to create a covariance kernel and add it to a graph as a generic node object.
- Parameters:
- Tensor size examples:
Input: A (kxmxn) Output: B (kxnxn)
Input: A (m) Output: B (1)
Input: A (mxn) Output: B (nxn)
Input: A (hxkxmxn) Output: B (hxkxnxn)
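A usage sketch following the documented shape mapping A (kxmxn) to B (kxnxn), i.e. k batches of m observations of n variables (setup helpers assumed):
```python
import mindpype as mp
from mindpype.kernels.statistics import CovarianceKernel

sess = mp.Session.create()                   # assumed setup helpers
graph = mp.Graph.create(sess)
obs = mp.Tensor.create(sess, (20, 250, 8))   # k=20, m=250 observations, n=8 variables
cov = mp.Tensor.create(sess, (20, 8, 8))     # one 8x8 covariance per batch

# regularization is passed through to the covariance estimate (default 0).
CovarianceKernel.add_to_graph(graph, obs, cov, regularization=0.05)
```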
- class mindpype.kernels.statistics.KurtosisKernel(graph, inA, outA, axis=None, keepdims=False, bias=True, fisher=True, nan_policy='propagate')[source]
Bases: Descriptive, Kernel
Calculates the kurtosis of values in a tensor
Note
This kernel utilizes the scipy function kurtosis.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
bias (bool) – If False, then the calculations are corrected for statistical bias. Default is True.
fisher (bool) – If True (default), Fisher’s definition is used (normal ==> 0.0). If False, Pearson’s definition is used (normal ==> 3.0).
nan_policy (str, one of {‘propagate’, ‘raise’, ‘omit’}) – Defines how to handle when input contains nan. ‘propagate’ returns nan, ‘raise’ throws an error, ‘omit’ performs the calculations ignoring nan values. Default is ‘propagate’.
init_input (Tensor or Array (optional)) – Initialization data for the graph
init_labels (Tensor or array (optional)) – Labels for the initialization data
- classmethod add_to_graph(graph, inA, outA, axis=None, keepdims=False, bias=True, fisher=True, nan_policy='propagate', init_input=None, init_labels=None)[source]
Factory method to create a kurtosis calculating kernel
Calculates the kurtosis of values in a tensor
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
bias (bool) – If False, then the calculations are corrected for statistical bias. Default is True.
fisher (bool) – If True (default), Fisher’s definition is used (normal ==> 0.0). If False, Pearson’s definition is used (normal ==> 3.0).
nan_policy (str, one of {‘propagate’, ‘raise’, ‘omit’}) – Defines how to handle when input contains nan. ‘propagate’ returns nan, ‘raise’ throws an error, ‘omit’ performs the calculations ignoring nan values. Default is ‘propagate’.
init_input (Tensor or Array (optional)) – Initialization data for the graph
init_labels (Tensor or array (optional)) – Labels for the initialization data
- class mindpype.kernels.statistics.MaxKernel(graph, inA, outA, axis=None, keepdims=False)[source]
Bases: Descriptive, Kernel
Kernel to extract maximum value along a Tensor axis
Note
This kernel utilizes the numpy function max.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
- classmethod add_to_graph(graph, inA, outA, axis=None, keepdims=False, init_input=None, init_labels=None)[source]
Factory method to create a maximum value kernel and add it to a graph as a generic node object.
- Parameters:
graph (Graph) – Graph that the node should be added to
inA (Tensor) – Input data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
- class mindpype.kernels.statistics.MeanKernel(graph, inA, outA, axis=None, keepdims=False)[source]
Bases: Descriptive, Kernel
Calculates the mean of values in a tensor
Note
This kernel utilizes the numpy function mean.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
- classmethod add_to_graph(graph, inA, outA, axis=None, keepdims=False, init_input=None, init_labels=None)[source]
Factory method to create a mean calculating kernel
Calculates the mean of values in a tensor
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
- class mindpype.kernels.statistics.MinKernel(graph, inA, outA, axis=None, keepdims=False)[source]
Bases: Descriptive, Kernel
Kernel to extract minimum value within a Tensor
Note
This kernel utilizes the numpy function min.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
- classmethod add_to_graph(graph, inA, outA, axis=None, keepdims=False, init_input=None, init_labels=None)[source]
Factory method to create a minimum value kernel and add it to a graph as a generic node object.
Calculates the minimum of values in a tensor
- Parameters:
graph (Graph) – Graph that the node should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
- class mindpype.kernels.statistics.SkewnessKernel(graph, inA, outA, axis=None, keepdims=False, bias=True, nan_policy='propagate')[source]
Bases: Descriptive, Kernel
Calculates the Skewness of values in a tensor
Note
This kernel utilizes the scipy function skew.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
bias (bool) – If False, then the calculations are corrected for statistical bias. Default is True.
nan_policy (str, one of {‘propagate’, ‘raise’, ‘omit’}) – Defines how to handle when input contains nan. ‘propagate’ returns nan, ‘raise’ throws an error, ‘omit’ performs the calculations ignoring nan values. Default is ‘propagate’.
init_input (Tensor or Array (optional)) – Initialization data for the graph
init_labels (Tensor or array (optional)) – Labels for the initialization data
- classmethod add_to_graph(graph, inA, outA, axis=None, keepdims=False, bias=True, nan_policy='propagate', init_input=None, init_labels=None)[source]
Factory method to create a skewness calculating kernel
Calculates the skewness of values in a tensor
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints) – Axis or axes along which to operate. By default, the flattened input is used.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
bias (bool) – If False, then the calculations are corrected for statistical bias. Default is True.
nan_policy (str, one of {‘propagate’, ‘raise’, ‘omit’}) – Defines how to handle when input contains nan. ‘propagate’ returns nan, ‘raise’ throws an error, ‘omit’ performs the calculations ignoring nan values. Default is ‘propagate’.
init_input (Tensor or Array (optional)) – Initialization data for the graph
init_labels (Tensor or array (optional)) – Labels for the initialization data
- class mindpype.kernels.statistics.StdKernel(graph, inA, outA, axis=None, ddof=0, keepdims=False)[source]
Bases: Descriptive, Kernel
Calculates the standard deviation of values in a tensor
Note
This kernel utilizes the numpy function std.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints, optional) – Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array.
ddof (int, optional) – Means Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. By default ddof is zero.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
- classmethod add_to_graph(graph, inA, outA, axis=None, ddof=0, keepdims=False, init_input=None, init_labels=None)[source]
Factory method to add a standard deviation node to a graph
Calculates the standard deviation of values in a tensor
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data
outA (Tensor) – Output data
axis (None or int or tuple of ints, optional) – Axis or axes along which the standard deviation is computed. The default is to compute the standard deviation of the flattened array.
ddof (int, optional) – Means Delta Degrees of Freedom. The divisor used in calculations is N - ddof, where N represents the number of elements. By default ddof is zero.
keepdims (bool) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one.
- class mindpype.kernels.statistics.VarKernel(graph, inA, outA, axis, ddof, keepdims)[source]
Bases: Descriptive, Kernel
Calculates the variance of values in a tensor
Note
This kernel utilizes the numpy function var.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor or Scalar) – Input data
outA (Tensor or Scalar) – Output data
axis (None or int or tuple of ints, optional) – Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array.
ddof (int, optional) – “Delta Degrees of Freedom”: the divisor used in the calculation is N - ddof, where N represents the number of elements. By default ddof is zero.
keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
- classmethod add_to_graph(graph, inA, outA, axis=None, ddof=0, keepdims=False, init_input=None, init_labels=None)[source]
Factory method to create a variance kernel
- Parameters:
graph (Graph) – Graph that the kernel should be added to
axis (None or int or tuple of ints, optional) – Axis or axes along which the variance is computed. The default is to compute the variance of the flattened array.
ddof (int, optional) – “Delta Degrees of Freedom”: the divisor used in the calculation is N - ddof, where N represents the number of elements. By default ddof is zero.
keepdims (bool, optional) – If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.
- class mindpype.kernels.statistics.ZScoreKernel(graph, inA, outA, init_data)[source]
Bases:
Kernel
Calculate a z-score for a tensor or scalar input
- Parameters:
Tangent Space
- class mindpype.kernels.tangent_space.TangentSpaceKernel(graph, inA, outA, initialization_data, regularization, metric, tsupdate, sample_weight)[source]
Bases:
Kernel
Kernel to estimate the tangent space. Applies the pyriemann.tangentspace method. Kernel expects SPD matrix input.
Note
This kernel utilizes the TangentSpace class from the pyriemann package.
- Parameters:
graph (Graph) – Graph object that this node belongs to
inA (Tensor) – Input data
outA (Tensor) – Output data
initialization_data (Tensor) – Data to initialize the estimator with (n_trials, n_channels, n_samples)
tsupdate (bool, default = False) – See pyriemann.tangentspace for more info
metric (str, default = 'riemann') – See pyriemann.tangentspace for more info
sample_weight (ndarray or None, default = None) – Weight of each sample. If None, all samples have equal weight
- classmethod add_to_graph(graph, inA, outA, initialization_data=None, regularization=0, metric='riemann', tsupdate=False, sample_weight=None)[source]
Factory method to create a tangent_space_kernel, add it to a node, and add the node to a specified graph
- Parameters:
graph (Graph) – Graph object that this node belongs to
inA (Tensor) – Input data
outA (Tensor) – Output data
initialization_data (Tensor, Array of Tensors) – Data to initialize the estimator with (n_trials, n_channels, n_samples)
regularization (float, default = 0) – regularization term applied to input data
metric (str, default = 'riemann') – See pyriemann.tangentspace for more info
sample_weight (ndarray or None, default = None) – Weight of each sample. If None, all samples have equal weight
- Returns:
node – Node object that was added to the graph
- Return type:
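A usage sketch mapping an 8 x 8 SPD covariance matrix to its tangent-space vector (setup helpers, the training-data layout, and the output length n*(n+1)/2 = 36 are assumptions):
```python
import mindpype as mp
from mindpype.kernels.tangent_space import TangentSpaceKernel

sess = mp.Session.create()                     # assumed setup helpers
graph = mp.Graph.create(sess)
cov = mp.Tensor.create(sess, (8, 8))           # SPD input matrix
ts_vec = mp.Tensor.create(sess, (36,))         # assumed tangent-space length
train_covs = mp.Tensor.create(sess, (40, 8, 8))

# The tangent-space reference point is estimated from the initialization
# data during graph initialization.
TangentSpaceKernel.add_to_graph(graph, cov, ts_vec,
                                initialization_data=train_covs,
                                metric='riemann')
```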
Thresholding
- class mindpype.kernels.threshold.ThresholdKernel(graph, inA, outA, thresh)[source]
Bases:
Kernel
Determine if scalar or tensor data elements are above or below threshold
- Parameters:
Transpose
- class mindpype.kernels.transpose.TransposeKernel(graph, inputA, outputA, axes)[source]
Bases:
Kernel
Kernel to compute the tensor transpose
Note
This kernel utilizes the numpy function transpose.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inputA (Tensor) – Input data container
outputA (Tensor) – Output data container
axes (tuple or list of ints) – Specifies the axes that will be transposed. See numpy.transpose.
XDawn Covariances
- class mindpype.kernels.xdawn_covariances.XDawnCovarianceKernel(graph, inA, outA, initialization_data=None, labels=None, n_filters=4, classes=None)[source]
Bases:
Kernel
Kernel to perform XDawn spatial filtering and covariance estimation. The XDawn method helps to enhance the signal to noise ratio of event related potentials (ERPs) in EEG data. The algorithm works by calculating the covariance matrices of the EEG signals to improve the detection of specific brain responses (such as the P300), by emphasizing the target response and reducing the non-target response.
Note
This kernel utilizes the XdawnCovariances class from the pyriemann package.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data container
outA (Tensor) – Output data container
initialization_data (Tensor) – Data to initialize the estimator with (n_trials, n_channels, n_samples)
labels (Tensor) – Class labels for initialization data
n_filters (int, default=4) – Number of Xdawn filters per class.
classes (list of int | None, default=None) – list of classes to use for prototype estimation. If None, all classes will be used.
n_classes (int, default=2) – Number of classes to use for prototype estimation
- classmethod add_to_graph(graph, inA, outA, initialization_data=None, labels=None, num_filters=4, classes=None)[source]
Factory method to create xdawn_covariance kernel, add it to a node, and add the node to the specified graph.
- Parameters:
graph (Graph) – Graph that the kernel should be added to
inA (Tensor) – Input data container
outA (Tensor) – Output data container
initialization_data (Tensor) – Data to initialize the estimator with (n_trials, n_channels, n_samples)
labels (Tensor) – Class labels for initialization data
num_filters (int, default=4) – Number of Xdawn filters per class.
classes (list of int | None, default=None) – list of classes to use for prototype estimation. If None, all classes will be used.
n_classes (int, default=2) – Number of classes to use for prototype estimation
- Returns:
node – Node containing the kernel
- Return type:
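A usage sketch for an ERP (e.g. P300) pipeline front end, fitting four Xdawn filters per class from labeled training trials (setup helpers and the output covariance shape are assumptions):
```python
import mindpype as mp
from mindpype.kernels.xdawn_covariances import XDawnCovarianceKernel

sess = mp.Session.create()                           # assumed setup helpers
graph = mp.Graph.create(sess)
trial = mp.Tensor.create(sess, (16, 500))            # channels x samples
xd_cov = mp.Tensor.create(sess, (16, 16))            # assumed output shape
train_data = mp.Tensor.create(sess, (120, 16, 500))  # (n_trials, n_channels, n_samples)
train_labels = mp.Tensor.create(sess, (120,))

# Filters and prototype responses are estimated from the initialization
# data during graph initialization; each executed trial then yields an
# Xdawn-enhanced covariance matrix for downstream Riemannian processing.
XDawnCovarianceKernel.add_to_graph(graph, trial, xd_cov,
                                   initialization_data=train_data,
                                   labels=train_labels,
                                   num_filters=4)
```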