IEEE 2025 Hackathon | Team G18
Classifying visual attention through brain signals using deep learning
- Overview
- Features
- How It Works
- Installation
- Usage
- Project Architecture
- Models
- Results
- Configuration
- Troubleshooting
- Contributing
- Team
- References
- License
This project implements a Brain-Computer Interface (BCI) system that classifies Steady-State Visual Evoked Potentials (SSVEPs) from EEG signals. When a user focuses on a flickering visual stimulus, their brain produces electrical responses at the same frequency. Our system detects and classifies these responses to determine which stimulus the user is attending to.
SSVEP is a neural response elicited when a person focuses on a visual stimulus flickering at a constant frequency. The brain's visual cortex generates electrical activity at the same frequency as the stimulus, which can be detected through EEG electrodes. This makes SSVEP an excellent paradigm for BCI applications because:
- High signal-to-noise ratio compared to other BCI paradigms
- Minimal user training required
- Fast communication rates possible
- Robust across different users
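Because the response appears at the stimulus frequency (and its harmonics), a simple power-spectrum peak picker already illustrates the idea. A hedged sketch on synthetic data (sampling rate, duration, and noise level are illustrative, not taken from this project):

```python
import numpy as np

np.random.seed(0)
fs = 256                      # sampling rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / fs)
stim_freqs = [9, 10, 12, 15]  # candidate SSVEP frequencies

# Synthetic "EEG": a 12 Hz SSVEP response buried in noise
signal = np.sin(2 * np.pi * 12 * t) + 0.5 * np.random.randn(t.size)

# Power spectrum via the real FFT
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

# Classify by the candidate frequency with the most spectral power
powers = [spectrum[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
detected = stim_freqs[int(np.argmax(powers))]
print(detected)  # 12
```

The deep models below learn richer features than this single-bin comparison, but the underlying signal property is the same.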
| Domain | Use Case |
|---|---|
| Assistive Technology | Communication devices for paralyzed patients |
| Neuroprosthetics | Control of robotic limbs and wheelchairs |
| Gaming | Hands-free game control |
| Smart Home | Brain-controlled device operation |
| Research | Cognitive neuroscience studies |
| Category | Description |
|---|---|
| Signal Processing | Band-pass filtering (1-40 Hz), notch filtering (50 Hz), epoching |
| Data Augmentation | Sliding window segmentation with configurable overlap |
| Deep Learning | Custom CNN architectures optimized for EEG classification |
| Visualization | Comprehensive plotting tools for signal analysis |
| Reproducibility | Pre-generated datasets and documented preprocessing |
- Multi-frequency classification: Distinguishes between 4 SSVEP frequencies (9, 10, 12, 15 Hz)
- Real-time ready: Sliding window approach enables responsive classification
- Flexible window sizes: Supports 0.2s to 2.0s analysis windows
- Cross-subject analysis: Data from multiple subjects included
```
Raw EEG Signal → Preprocessing → Epoching → Sliding Windows → Neural Network → Classification
      |               |             |              |                |                |
  .mat files     Band-pass +     Extract      Augment data    TinyEEGNet or    Predict which
  from device    notch filter    stimulus     with overlap    Two-Branch CNN   frequency the
                                 periods                                       user attends to
```
- Data Acquisition: EEG recorded while subject views flickering LEDs at 9, 10, 12, and 15 Hz
- Preprocessing: Remove noise with band-pass (1-40 Hz) and notch (50 Hz) filters
- Epoching: Extract time segments corresponding to each stimulus presentation
- Segmentation: Create overlapping windows for data augmentation
- Classification: Neural network predicts which frequency the user attended to
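The segmentation step above can be sketched as a plain NumPy function (the array shapes follow the `.npz` format described later; the implementation itself is illustrative, not the notebook's exact code):

```python
import numpy as np

def sliding_windows(epoch, fs, window_size, step_size):
    """Cut one (n_channels, n_samples) epoch into overlapping windows."""
    win = int(window_size * fs)
    step = int(step_size * fs)
    starts = range(0, epoch.shape[1] - win + 1, step)
    return np.stack([epoch[:, s:s + win] for s in starts])

# One 2-second epoch at 256 Hz with 8 channels (illustrative numbers)
epoch = np.random.randn(8, 512)
windows = sliding_windows(epoch, fs=256, window_size=1.0, step_size=0.2)
print(windows.shape)  # (6, 8, 256)
```

Each window inherits the label of its parent epoch, which is how one stimulus presentation yields several training examples.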
- Python 3.9 or higher
- pip package manager
- (Optional) CUDA-compatible GPU for faster training
```bash
git clone https://github.com/anaya33/ssvep-bci-classification.git
cd ssvep-bci-classification
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```

| Package | Purpose |
|---|---|
| `numpy` | Numerical computing |
| `scipy` | Scientific computing and signal processing |
| `matplotlib` | Visualization |
| `scikit-learn` | Machine learning utilities and SVM baseline |
| `mne` | EEG/MEG data processing |
| `torch` | Deep learning framework |
| `ipykernel` | Jupyter notebook support |
1. Open the main notebook:

   ```bash
   jupyter notebook final_code/SSVEP_BCI_Classification_G18.ipynb
   ```

2. Select a Python kernel with the dependencies installed
3. Run cells sequentially from top to bottom
The notebook is organized into clearly labeled sections:
| Section | Description |
|---|---|
| Data Loading | Load raw .mat files and create MNE objects |
| Preprocessing | Apply filters and extract epochs |
| Sliding Windows | Generate augmented training data |
| Model Training | Train TinyEEGNet or Two-Branch CNN |
| Evaluation | Assess model performance |
| Visualization | Plot signals and results |
Skip preprocessing by using pre-computed datasets:
```python
import numpy as np

# Load pre-generated sliding window data
data = np.load('final_code/training_data/epochs2_sliding_window_subject_1_1.npz')
X = data['X']  # Shape: (n_windows, n_channels, window_samples)
y = data['y']  # Shape: (n_windows,)
```

Available window sizes: 0.2s, 0.5s, 1.0s, 1.5s, 2.0s
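The loaded arrays can be wrapped for PyTorch training. A minimal sketch, using random data in place of a loaded `.npz` file (the split ratio and batch size here are illustrative, not the notebook's settings):

```python
import numpy as np
import torch
from sklearn.model_selection import train_test_split
from torch.utils.data import DataLoader, TensorDataset

np.random.seed(0)
# Stand-in for a loaded .npz: 100 windows, 8 channels, 512 samples, 4 classes
X = np.random.randn(100, 8, 512).astype(np.float32)
y = np.random.randint(0, 4, size=100)

# Stratified split keeps all 4 frequencies represented in both sets
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

train_loader = DataLoader(
    TensorDataset(torch.from_numpy(X_tr), torch.from_numpy(y_tr).long()),
    batch_size=32, shuffle=True,
)
```

Splitting at the window level lets windows from the same epoch land in both sets, so a stricter evaluation would split by epoch or session before windowing.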
```
ieee2025hackathon_g18team/
|
|-- final_code/
|   |-- SSVEP_BCI_Classification_G18.ipynb   # Main notebook (use this)
|   |-- static/                              # Raw EEG recordings
|   |   |-- subject_1_fvep_led_training_1.mat
|   |   |-- subject_1_fvep_led_training_2.mat
|   |   |-- subject_2_fvep_led_training_1.mat
|   |   |-- subject_2_fvep_led_training_2.mat
|   |
|   |-- training_data/                       # Pre-generated datasets
|       |-- epochs{window_size}_sliding_window_subject_{id}_{session}.npz
|
|-- first_tests/                             # Early prototypes (reference only)
|
|-- presentation/
|   |-- SSVEP-18.mp4                         # Project demo video
|
|-- requirements.txt                         # Python dependencies
|-- readme.md                                # This file
```
| Channel | Description |
|---|---|
| 0 | Time stamps |
| 1-8 | EEG channels (occipital region) |
| 9 | Trigger signal |
| 10 | LDA channel |
| Key | Shape | Description |
|---|---|---|
| `X` | `(n_windows, 8, samples)` | EEG data (8 channels) |
| `y` | `(n_windows,)` | Labels (0-3 for the 4 frequencies) |
```python
# Create MNE Raw object with 11 channels
raw = mne.io.RawArray(data, info)

# Apply filters
raw.filter(l_freq=1.0, h_freq=40.0)   # Band-pass
raw.notch_filter(freqs=50.0)          # Remove power line noise

# Extract epochs around trigger events
epochs = mne.Epochs(raw, events, event_id, tmin=0, tmax=2.0)

# Generate overlapping windows for data augmentation
window_size = 2.0  # seconds
step_size = 0.2    # seconds (90% overlap)

# Train neural network
model = TinyEEGNet(n_channels=8, n_classes=4)
optimizer = torch.optim.Adam(model.parameters())
criterion = torch.nn.CrossEntropyLoss()
```

A lightweight 1D CNN designed for efficient EEG classification.
Architecture:

```
Input (8 channels x samples)
            |
Conv1D (8 -> 16 filters, kernel=3)
            |
       BatchNorm1D
            |
          ReLU
            |
   GlobalAveragePooling
            |
Linear (16 -> 4 classes)
            |
Output (4 class probabilities)
```
Characteristics:
- Minimal parameters for fast inference
- Suitable for real-time applications
- Works best with longer windows (>1.5s)
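The layer stack above maps to a few lines of PyTorch. This is a hedged sketch of the diagram, not necessarily the notebook's exact implementation (padding and initialization choices are assumptions):

```python
import torch
import torch.nn as nn

class TinyEEGNet(nn.Module):
    """Minimal 1D CNN: Conv1D -> BatchNorm -> ReLU -> GAP -> Linear."""
    def __init__(self, n_channels: int = 8, n_classes: int = 4):
        super().__init__()
        self.conv = nn.Conv1d(n_channels, 16, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm1d(16)
        self.head = nn.Linear(16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples)
        h = torch.relu(self.bn(self.conv(x)))
        h = h.mean(dim=-1)    # global average pooling over time
        return self.head(h)   # (batch, n_classes) logits

model = TinyEEGNet()
logits = model(torch.randn(4, 8, 512))  # e.g. 4 windows of 512 samples
```

Global average pooling makes the head independent of window length, which is why the same model accepts any of the window sizes listed above.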
A novel architecture combining temporal and spectral features.
Architecture:

```
    Input (8 channels x samples)
               |
        +------+------+
        |             |
   Time Branch   Frequency Branch
   (1D Conv)     (STFT -> 2D Conv)
        |             |
        +------+------+
               |
         Feature Fusion
               |
      Classification Head
               |
       Output (4 classes)
```
Characteristics:
- Captures both time-domain and frequency-domain patterns
- Higher accuracy than TinyEEGNet
- Requires more computational resources
- Best performance with 1.5-2.0s windows
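One way to realize the two-branch idea is sketched below. The layer sizes, STFT parameters, and fusion scheme are illustrative guesses, not the notebook's values:

```python
import torch
import torch.nn as nn

class TwoBranchCNN(nn.Module):
    """Fuses a 1D time-domain branch with a 2D branch over STFT magnitudes."""
    def __init__(self, n_channels: int = 8, n_classes: int = 4, n_fft: int = 64):
        super().__init__()
        self.n_fft = n_fft
        self.time_branch = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.freq_branch = nn.Sequential(
            nn.Conv2d(n_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)  # 16 + 16 fused features

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t = x.shape
        # Frequency branch: per-channel STFT magnitude
        spec = torch.stft(
            x.reshape(b * c, t), n_fft=self.n_fft, hop_length=self.n_fft // 4,
            window=torch.hann_window(self.n_fft, device=x.device),
            return_complex=True,
        ).abs()
        spec = spec.reshape(b, c, spec.shape[-2], spec.shape[-1])
        f_time = self.time_branch(x).flatten(1)     # (batch, 16)
        f_freq = self.freq_branch(spec).flatten(1)  # (batch, 16)
        return self.head(torch.cat([f_time, f_freq], dim=1))

model = TwoBranchCNN()
out = model(torch.randn(2, 8, 512))  # logits of shape (2, 4)
```

Concatenation is the simplest fusion; the explicit spectrogram branch gives the model direct access to the stimulus-frequency bins that define SSVEP.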
A traditional machine learning approach for comparison.
```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Flatten and standardize
X_flat = X.reshape(X.shape[0], -1)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_flat)

# Split, then train the SVM
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, stratify=y)
svm = SVC(kernel='rbf')
svm.fit(X_train, y_train)
```

| Model | Window Size | Accuracy | Notes |
|---|---|---|---|
| TinyEEGNet | 1.0s | ~25% | Near chance (4 classes) |
| TinyEEGNet | 2.0s | ~40% | Improved with longer windows |
| Two-Branch CNN | 1.5s | ~65% | Significant improvement |
| Two-Branch CNN | 2.0s | ~75% | Best performance |
| SVM Baseline | 2.0s | ~45% | Traditional ML comparison |
- Window length matters: Longer windows (1.5-2.0s) significantly improve accuracy
- Frequency features help: The two-branch architecture outperforms time-only models
- Data augmentation is crucial: Sliding windows with overlap improve generalization
| Parameter | Default | Description |
|---|---|---|
| `t_epoch` | 2.0 s | Epoch duration |
| `window_size` | 2.0 s | Sliding window length |
| `step_size` | 0.2 s | Window step (overlap = 1 - step/window) |
| `l_freq` | 1.0 Hz | High-pass filter cutoff |
| `h_freq` | 40.0 Hz | Low-pass filter cutoff |
| `notch_freq` | 50.0 Hz | Notch filter frequency |
1. Add new recordings: Place `.mat` files in `final_code/static/`

2. Update file paths: Modify glob patterns in the data loading cells

   ```python
   mat_files = glob.glob('final_code/static/your_pattern_*.mat')
   ```

3. Adjust trigger mapping: If your protocol differs, update the event mapping

   ```python
   # Default: [15, 12, 10, 9] Hz in order of appearance
   freq_order = [15, 12, 10, 9]
   ```

4. Tune window parameters: Trade off latency vs. accuracy

   ```python
   window_size = 1.5  # Shorter = faster, longer = more accurate
   step_size = 0.1    # Smaller = more data, larger = less overlap
   ```
| Issue | Solution |
|---|---|
| Import errors | Ensure all dependencies are installed: `pip install -r requirements.txt` |
| Path not found | Check working directory; paths are relative to repo root |
| CUDA out of memory | Reduce batch size or use CPU: `device = 'cpu'` |
| Low accuracy | Try longer window sizes (1.5-2.0s) |
| Trigger detection fails | Verify trigger channel index and threshold |
Some early cells may reference Colab-style paths (`/content/`). For local execution, ensure paths point to:

- Raw data: `final_code/static/`
- Processed data: `final_code/training_data/`
PyTorch automatically uses CUDA if available:
```python
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Using device: {device}")
```

Contributions are welcome! Here's how to help:
- Bug fixes: Found an issue? Submit a pull request
- New models: Implement additional architectures (EEGNet, Transformer, etc.)
- Documentation: Improve explanations or add tutorials
- Visualization: Create better plots or interactive dashboards
- Optimization: Improve training speed or model efficiency
1. Fork the repository
2. Create a feature branch

   ```bash
   git checkout -b feature/your-feature-name
   ```

3. Make your changes
4. Test thoroughly
5. Submit a pull request with a clear description
| Difficulty | Task |
|---|---|
| Easy | Add more visualization functions |
| Easy | Improve code documentation |
| Medium | Implement EEGNet architecture |
| Medium | Add cross-validation support |
| Medium | Create a training script (separate from notebook) |
| Advanced | Implement online/streaming classification |
| Advanced | Add transfer learning between subjects |
| Advanced | Build a real-time demo application |
| Name | GitHub |
|---|---|
| Haocheng Wu | @TedHaochengWu |
| Mohammadreza Behbood | @mudcontract |
| Soukaina Hamou | @SoukainaHAMOU |
| Nathan Yu | @Littnatenate |
| Jeronimo Sanchez Santamaria | @JeronimoSantamaria |
| Flora Santos | - |
| Anaya Yorke | @anaya33 |
Watch our complete project walkthrough: presentation/SSVEP-18.mp4
- MNE-Python - EEG/MEG analysis toolkit
- PyTorch - Deep learning framework
- scikit-learn - Machine learning library
- Vialatte, F. B., et al. "Steady-state visually evoked potentials: focus on essential paradigms and future perspectives." Progress in neurobiology (2010)
- Lawhern, V. J., et al. "EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces." Journal of neural engineering (2018)
This project is licensed under the MIT License - see the LICENSE file for details.