API Breaking Changes
- Methods perform_bandpass and perform_bandstop now use start_freq and stop_freq instead of center_freq and band_width (see the sketch after this list)
- Method predict from MLModel now returns an array of doubles instead of a single value
- Removed and renamed old classifiers and metrics; the idea is to replace them with ONNX models
- perform_wavelet_denoising and wavelet transforms now have more arguments to tune
- In some bindings, classes and structures were renamed to make naming uniform across programming languages
- Cpp: methods for FFT now have one more argument to return the size
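For reference, here is a minimal sketch of the new filter calls in Python. The frequency values, filter order, and ripple below are placeholders picked for illustration, not recommended defaults; the old signature is shown in a comment for comparison.

import numpy as np
from brainflow.board_shim import BoardShim, BoardIds
from brainflow.data_filter import DataFilter, FilterTypes

sampling_rate = BoardShim.get_sampling_rate(BoardIds.SYNTHETIC_BOARD.value)
data = np.random.randn(sampling_rate * 5)  # placeholder single-channel signal
# old: perform_bandpass(data, sampling_rate, center_freq, band_width, order, filter_type, ripple)
# new: start and stop frequencies instead of center frequency and bandwidth
DataFilter.perform_bandpass(data, sampling_rate, 1.0, 50.0, 4, FilterTypes.BUTTERWORTH.value, 0.0)
DataFilter.perform_bandstop(data, sampling_rate, 48.0, 52.0, 4, FilterTypes.BUTTERWORTH.value, 0.0)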
New Features
- We’ve added support for ONNX models directly from BrainFlow code - #102
- Add method get_custom_band_powers; it works like get_avg_band_powers but you can define the bands yourself (see the sketch after this list)
- BrainFlowModelParams has two new optional fields, output_name and max_array_size; they may be needed for ONNX
- Java: add method overloading for enums in addition to ints, e.g. BoardIds board_id = BoardIds.SYNTHETIC_BOARD; before it had to be int board_id = BoardIds.SYNTHETIC_BOARD.get_code()
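As an illustration of get_custom_band_powers, here is a minimal sketch. It assumes the method takes a list of (start_freq, stop_freq) pairs plus the channels, sampling rate, and an apply_filter flag, and returns average and standard deviation band powers like get_avg_band_powers; the band ranges themselves are arbitrary values chosen for the example.

import time
from brainflow.board_shim import BoardShim, BrainFlowInputParams, BoardIds
from brainflow.data_filter import DataFilter

params = BrainFlowInputParams()
board_id = BoardIds.SYNTHETIC_BOARD.value
board = BoardShim(board_id, params)
board.prepare_session()
board.start_stream()
time.sleep(5)
data = board.get_board_data()
board.stop_stream()
board.release_session()

eeg_channels = BoardShim.get_eeg_channels(board_id)
sampling_rate = BoardShim.get_sampling_rate(board_id)
# custom (start_freq, stop_freq) pairs, picked for illustration only
bands = [(2.0, 4.0), (4.0, 8.0), (8.0, 13.0), (13.0, 30.0), (30.0, 45.0)]
avg_band_powers, stddev_band_powers = DataFilter.get_custom_band_powers(
    data, bands, eeg_channels, sampling_rate, True)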
ONNX
Check the BrainFlow + ONNX tutorial to get more info about use cases and current limitations.
The main feature of this release is the integration of the onnxruntime inference framework directly into the BrainFlow API. There were two reasons to implement it:
- It allows users to train their own classifiers and share them with others using the BrainFlow API. Now you can train your models in Python and export them to other programming languages, and even to other people, without pushing your code into the BrainFlow repo.
- Internally, the old classifiers did something very similar to ONNX: we trained them in Python and exported the weights to C++ using various tricks. For example, we called the libsvm C++ API directly, implemented KNN in C++ ourselves, etc. With ONNX this is unified, which is why the old classifiers have been removed.
Unfortunately, it was impossible to keep the ML API backward compatible. ONNX models can return multiple values, and usually they do, while the BrainFlow API method predict returned a single value. So, since we already had to make a breaking change, I’ve decided to push other API breaking changes into the same release. The BoardShim class remains the same; all changes are in the DataFilter and MLModel classes.
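To make the new MLModel flow concrete, here is a minimal sketch of loading a user-trained ONNX model. The file name, output_name, and max_array_size values are placeholder assumptions for illustration, and it assumes the BrainFlowMetrics.USER_DEFINED metric, the BrainFlowClassifiers.ONNX_CLASSIFIER classifier, and an input feature vector whose shape matches your model.

import numpy as np
from brainflow.ml_model import MLModel, BrainFlowModelParams, BrainFlowMetrics, BrainFlowClassifiers

model_params = BrainFlowModelParams(BrainFlowMetrics.USER_DEFINED.value,
                                    BrainFlowClassifiers.ONNX_CLASSIFIER.value)
model_params.file = 'my_model.onnx'          # hypothetical model exported from your own training code
model_params.output_name = 'probabilities'   # new optional field, depends on your model's output names
model_params.max_array_size = 16             # new optional field, upper bound for the returned array

model = MLModel(model_params)
model.prepare()
feature_vector = np.random.rand(5)           # placeholder input, shape depends on your model
prediction = model.predict(feature_vector)   # now returns an array of doubles, not a single value
print(prediction)
model.release()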
Among other changes, you most likely need to pay attention to the start and stop frequencies in perform_bandpass and perform_bandstop (see the sketch after the API Breaking Changes list above).
Wavelet Transforms and Denoising
Until this release wavelet-based denoising didn’t work well, mostly because some important parameters were hardcoded and not optimized properly. Now you can tune all of them and pick between different thresholding options.
Here is the code that you can use as a reference. The arguments listed below were optimized using the Synthetic board and real data, but feel free to tune them.
import time
import matplotlib
import numpy as np
import pandas as pd
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from brainflow.board_shim import BoardShim, BrainFlowInputParams, LogLevels, BoardIds
from brainflow.data_filter import DataFilter, AggOperations, WaveletTypes, NoiseEstimationLevelTypes, WaveletExtensionTypes, ThresholdTypes, WaveletDenoisingTypes
def main():
    BoardShim.enable_dev_board_logger()

    params = BrainFlowInputParams()
    board_id = BoardIds.SYNTHETIC_BOARD.value
    board = BoardShim(board_id, params)
    board.prepare_session()
    board.start_stream()
    time.sleep(10)
    data = board.get_current_board_data(500)
    board.stop_stream()
    board.release_session()

    eeg_channels = BoardShim.get_eeg_channels(board_id)
    # plot raw EEG channels before denoising
    df = pd.DataFrame(np.transpose(data))
    plt.figure()
    df[eeg_channels].plot(subplots=True)
    plt.savefig('before_processing.png')

    # denoise each EEG channel in place
    for channel in eeg_channels:
        DataFilter.perform_wavelet_denoising(data[channel], WaveletTypes.BIOR3_9, 3,
                                             WaveletDenoisingTypes.SURESHRINK, ThresholdTypes.HARD,
                                             WaveletExtensionTypes.SYMMETRIC, NoiseEstimationLevelTypes.FIRST_LEVEL)

    # plot the same channels after denoising
    df = pd.DataFrame(np.transpose(data))
    plt.figure()
    df[eeg_channels].plot(subplots=True)
    plt.savefig('after_processing.png')


if __name__ == "__main__":
    main()