# Getting Started

## Installation

### From Source (Recommended)

```bash
git clone https://github.com/lhzn-io/biologger-pseudotrack.git
cd biologger-pseudotrack
pip install -e .
```

### From PyPI (When Released)

```bash
pip install biologger-pseudotrack
```

### From TestPyPI (Development)

```bash
pip install --index-url https://test.pypi.org/simple/ \
  --extra-index-url https://pypi.org/simple \
  biologger-pseudotrack
```
## Basic Usage

### Command-Line Interface

```bash
# Process a swordfish deployment (adaptive sensor fusion mode)
python -m biologger_pseudotrack --config examples/swordfish_config.yml

# Process a whale shark deployment (post-facto mode)
python -m biologger_pseudotrack --config examples/whaleshark_postfacto.yml
```
### Python API

```python
from biologger_pseudotrack.streaming.processor import StreamingProcessor

# Create an adaptive sensor fusion processor
processor = StreamingProcessor(filt_len=48, freq=16)

# Process data in real time
for record in sensor_data:
    result = processor.process(record)
    print(f"Pitch: {result['pitch_deg']:.2f}°")
    print(f"Roll: {result['roll_deg']:.2f}°")
```
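For intuition about what the processor reports, static pitch and roll can be approximated from a single 3-axis accelerometer reading using the textbook tilt formulas. This is a minimal sketch only: the function name and the axis convention (x forward, y right, z down) are illustrative assumptions, not the library's internal implementation.

```python
import math

def pitch_roll_from_accel(ax, ay, az):
    """Static pitch/roll in degrees from one accelerometer sample (in g).
    Textbook tilt formulas; axis convention assumed, not the library's."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Example: a tag pitched ~30° nose-up in the x-z plane, no roll.
pitch, roll = pitch_roll_from_accel(-0.5, 0.0, 0.866)
print(f"Pitch: {pitch:.1f}°, Roll: {roll:.1f}°")  # Pitch: 30.0°, Roll: 0.0°
```

In practice `StreamingProcessor` also fuses magnetometer and gyro data, so its outputs will differ from this accelerometer-only approximation during dynamic motion.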
## Configuration

The pipeline uses YAML configuration files. Example configurations are provided in the `examples/` directory:

- `swordfish_config.yml` - Swordfish adaptive processing
- `whale_shark_config.yml` - Whale shark processing
- `seal_config.yml` - Seal processing
## Calibration Modes

Both pipelines share a unified `calibration:` config block with three modes:

- **Progressive** (adaptive default): Accumulates calibration data online using exponential moving averages. Memory-efficient and suitable for real-time processing; converges within the first 2-3 minutes of a deployment.
- **Fixed** (pre-computed values): Uses locked calibration parameters from prior runs. Fastest processing (single-pass, no calibration overhead). Requires prior calibration from `batch_compute` or R analysis.
- **Batch Compute** (post-facto only): Two-pass processing: collect the full dataset, compute calibrations, then reprocess. Matches R gRumble's `colMeans()` and `MagOffset()` exactly. Validation target: <0.1° error vs. the R reference implementation.
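To make the progressive mode's core idea concrete, the sketch below shows an exponential-moving-average offset estimator: a single running value absorbs each new reading with a small weight, so the estimate converges to the sensor bias without storing history. The class and parameter names are hypothetical, not the library's API.

```python
import math

class ProgressiveCalibrator:
    """Illustrative online calibrator: tracks a running sensor-offset
    estimate with an exponential moving average (EMA)."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha   # smoothing factor; smaller = slower but smoother
        self.offset = None   # current EMA of the raw readings

    def update(self, reading):
        # The first sample seeds the estimate; later samples blend in gradually.
        if self.offset is None:
            self.offset = reading
        else:
            self.offset = (1 - self.alpha) * self.offset + self.alpha * reading
        return reading - self.offset  # offset-corrected value

cal = ProgressiveCalibrator(alpha=0.05)
# Feed a biased signal: a constant 0.3 g offset plus a small oscillation.
for i in range(500):
    corrected = cal.update(0.3 + 0.01 * math.sin(i / 7))
print(f"estimated offset ≈ {cal.offset:.3f}")  # converges toward 0.3
```

The memory footprint is one float per calibrated quantity, which is why this style of accumulation suits real-time processing on tag hardware.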
### Example Configuration

```yaml
# config_swordfish_postfacto.yml
input:
  file: "data/Swordfish-RED001_20220812_19A0564/19A0564.csv"

postfacto:
  parameters:
    filt_len: 48
    freq: 16
  calibration:
    attachment_angle_mode: 'batch_compute'
    magnetometer_mode: 'batch_compute'
  enable_depth_interpolation: true

output:
  file: "output/swordfish_processed.csv"
```
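Reading such a file from Python might look like the sketch below. It assumes the third-party `PyYAML` package (`pip install pyyaml`); the key names mirror the example above, but the loading code itself is illustrative, not part of the package's API.

```python
# Hypothetical sketch: inspecting a post-facto config with PyYAML.
import yaml

config_text = """
postfacto:
  parameters:
    filt_len: 48
    freq: 16
  calibration:
    attachment_angle_mode: 'batch_compute'
    magnetometer_mode: 'batch_compute'
"""

config = yaml.safe_load(config_text)       # parse YAML into nested dicts
params = config["postfacto"]["parameters"]
print(params["filt_len"], params["freq"])  # -> 48 16
```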
## Next Steps

- See Pipeline Architecture: From Lab to Field for a detailed pipeline architecture
- Check example configurations in the `examples/` directory
- Review species-specific configs in the `data/` subdirectories