Neuro-ML uncertainty dome
A 3D scene for visualizing signals, risk clusters, and the coupling between confidence, entropy, and predictive instability in the prototype.
A public HTML interface for exploring calibrated probabilities, predictive uncertainty, neuro-inspired signals, and the topology of the artificial neural network behind the project. It makes the research navigable directly from the repository.
Two three-dimensional environments inside the same HTML document: one for neuroscientific dynamics and uncertainty metrics, and one for the artificial neural network used by the deep learning model.
A 3D space dedicated to the neuroscientific dynamics: signals, risk clusters, and the coupling between confidence, entropy, and predictive instability in the prototype.
A 3D space dedicated to the classifier architecture: tabular input, temporal branch, multimodal fusion, and the final probabilistic decision layer.
Mean calibrated class probabilities across the predictive distribution.
Mean class-wise uncertainty estimated by Monte Carlo Dropout.
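One common way to obtain such class-wise estimates is to average several dropout-enabled forward passes and use the spread across passes as the uncertainty. A minimal pure-Python sketch follows; the function name and the choice of per-class standard deviation as the uncertainty measure are illustrative assumptions, not the project's actual implementation:

```python
import math

def mc_dropout_summary(samples):
    """Summarize T stochastic forward passes for one input.

    samples: list of T softmax vectors, one per dropout-enabled pass.
    Returns the mean class probabilities and the per-class standard
    deviation across passes, a simple class-wise uncertainty estimate.
    """
    T = len(samples)
    C = len(samples[0])
    mean_probs = [sum(s[c] for s in samples) / T for c in range(C)]
    std = [math.sqrt(sum((s[c] - mean_probs[c]) ** 2 for s in samples) / T)
           for c in range(C)]
    return mean_probs, std

# Example: 4 passes over one 3-class input -> mean ≈ [0.7, 0.2, 0.1]
draws = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1], [0.7, 0.2, 0.1]]
mean_p, unc = mc_dropout_summary(draws)
```

A class whose probability barely moves across passes (like the third class above) gets near-zero uncertainty, while classes the dropout masks disagree on get larger values.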
Observed reliability compared with predicted confidence.
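A reliability comparison of this kind can be reproduced by binning predictions by confidence and checking the observed accuracy inside each bin; for a well-calibrated model the two curves coincide. A sketch under the usual equal-width-bin assumption (names are illustrative):

```python
def reliability_bins(confidences, correct, n_bins=10):
    """Group predictions into equal-width confidence bins and report,
    per non-empty bin, (mean confidence, observed accuracy, count).
    A well-calibrated model has accuracy ~= confidence in every bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    out = []
    for b in bins:
        if b:
            mean_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(1 for _, ok in b if ok) / len(b)
            out.append((mean_conf, accuracy, len(b)))
    return out
```

The same per-bin gaps, weighted by bin counts, give the expected calibration error (ECE).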
Comparison across raw, calibrated, and MC Dropout inference regimes.
Compact training history for the research MVP.
Validation accuracy and macro-F1 across epochs.
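Macro-F1 averages the per-class F1 scores with equal weight, so minority classes count as much as majority ones; that is why it is tracked alongside plain accuracy. A minimal sketch (illustrative, not the project's metric code):

```python
def macro_f1(y_true, y_pred, classes):
    """Unweighted mean of per-class F1 scores.  Each class contributes
    equally regardless of how many samples it has."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        denom = 2 * tp + fp + fn
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)
```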
Each point represents one sample. The X axis shows confidence, the Y axis shows predictive entropy, and marker size tracks mutual information.
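All three plotted quantities can be derived from the same Monte Carlo samples. A minimal sketch, assuming T softmax vectors per input and a BALD-style mutual information (entropy of the mean minus mean of the entropies); the function names are illustrative:

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a probability vector."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def predictive_stats(samples):
    """From T stochastic softmax vectors for one input, return
    (confidence, predictive entropy, mutual information):

    confidence  = max of the mean distribution      (x axis)
    entropy     = H[mean distribution]              (y axis)
    mutual info = H[mean] - mean_t H[sample_t]      (marker size)
    """
    T = len(samples)
    C = len(samples[0])
    mean_p = [sum(s[c] for s in samples) / T for c in range(C)]
    h_mean = entropy(mean_p)
    mean_h = sum(entropy(s) for s in samples) / T
    return max(mean_p), h_mean, h_mean - mean_h
```

Two passes that each answer confidently but disagree (e.g. `[1, 0]` and `[0, 1]`) yield maximal mutual information: high total entropy, none of it explained by per-pass noise.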
Deterministic output before post-hoc calibration.
Output after temperature scaling.
Mean output under stochastic inference.
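The difference between the raw and calibrated regimes above comes down to a single temperature applied to the logits before the softmax. A minimal sketch, assuming the temperature has already been fitted on a validation set (as is usual for post-hoc temperature scaling):

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax over logits z scaled as z / T.

    T = 1 reproduces the raw deterministic output; T > 1 softens the
    distribution (the common case after fitting T on held-out data).
    Subtracting the max is for numerical stability only.
    """
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Note that temperature scaling never changes the argmax, only the sharpness: the predicted class is identical across the raw and calibrated panels, while the confidence attached to it shrinks or grows.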
Text summary of the topology used by the neural network 3D scene.
Static viewer structure designed for public repository navigation.
The most ambiguous cases under stochastic inference. Interpretation should always combine probability, entropy, and mutual information.
| Sample | Predicted | True | Confidence | Entropy | MI | Mean probabilities |
|---|---|---|---|---|---|---|
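One plausible way to surface such "most ambiguous" rows is to rank samples by predictive entropy, breaking ties with mutual information. The dictionary keys below are hypothetical, not field names from `latest_inference.json`:

```python
def most_ambiguous(records, k=3):
    """Return the k samples with the highest predictive entropy,
    using mutual information as a tie-breaker.  Each record is a
    dict with at least 'entropy' and 'mi' keys (illustrative names)."""
    ranked = sorted(records, key=lambda r: (r["entropy"], r["mi"]),
                    reverse=True)
    return ranked[:k]
```

As the caption notes, entropy alone can be misleading (aleatoric noise also raises it), so the table keeps probability, entropy, and mutual information side by side rather than collapsing them into one score.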
Run `python scripts/run_neuro_risk_mvp.py` or `python scripts/infer_neuro_risk.py` to regenerate `jsviz/public/latest_inference.json`.