AI Robot Decision Explainer
- Explains robot decisions in understandable terms
- Shows the key factors influencing AI behavior
- Displays confidence scores and reasoning
- Visualizes the decision pathways used by the AI model
Robot Scenario & Inputs
Decision Explanation
The robot stopped because a red traffic light and a nearby pedestrian were detected. Confidence in this decision is high.

Key influencing factors:
- Traffic Light (Red): 65%
- Pedestrian Nearby: 25%
- Obstacle Distance: 10%
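Influence percentages like those above are typically produced by normalizing raw attribution scores so they sum to 100%. A minimal sketch, with entirely hypothetical score values:

```python
# Hypothetical raw attribution scores for each factor; the names match
# the list above, but the numeric values are illustrative assumptions.
raw_scores = {
    "Traffic Light (Red)": 1.3,
    "Pedestrian Nearby": 0.5,
    "Obstacle Distance": 0.2,
}

# Normalize to percentages that sum to 100.
total = sum(raw_scores.values())
influence = {k: round(100 * v / total) for k, v in raw_scores.items()}

print(influence)
# {'Traffic Light (Red)': 65, 'Pedestrian Nearby': 25, 'Obstacle Distance': 10}
```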
How To Use AI Robot Decision Explainer
📝 Step 1: Choose Scenario
Select a predefined robot scenario (Navigation, Pick & Place, Collision Avoidance) to load default sensor data.
⚙️ Step 2: Review Sensor & Model Data
The tool shows simulated environment data (LIDAR, camera, GPS) and raw AI model outputs (logits/probabilities). You can edit the JSON to test different situations.
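The editable JSON might look like the sketch below. The field names and values here are illustrative assumptions, not the tool's actual schema:

```json
{
  "sensors": {
    "camera": {"detections": ["red_traffic_light", "pedestrian"]},
    "lidar": {"min_obstacle_distance_m": 4.2},
    "gps": {"lat": 0.0, "lon": 0.0}
  },
  "model_output": {
    "logits": {"stop": 2.1, "slow": 0.4, "proceed": 0.2}
  }
}
```

Editing a value such as `min_obstacle_distance_m`, or removing a sensor block entirely, lets you test how the explanation changes under different conditions.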
🔧 Step 3: Analyze Decision
Click "Analyze Decision" to process the inputs. The AI Explainer interprets the model's output and sensor data to generate a human-readable explanation.
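One common way to turn raw logits into the confidence scores an explainer reports is a softmax followed by an argmax. A minimal sketch, with hypothetical action names and logit values:

```python
import math

# Hypothetical raw model logits for each candidate action.
logits = {"stop": 2.1, "slow": 0.4, "proceed": 0.2}

# Softmax: exponentiate, then normalize into probabilities.
exps = {k: math.exp(v) for k, v in logits.items()}
total = sum(exps.values())
probs = {k: v / total for k, v in exps.items()}

# The reported decision is the highest-probability action.
decision = max(probs, key=probs.get)
print(decision, round(probs[decision], 2))
```

With these example logits, "stop" wins with roughly 75% confidence; real confidence values depend on the actual model outputs.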
📋 Step 4: Interpret Results
Review the decision summary, confidence scores, key influencing factors, and the step-by-step decision pathway. Use the "Copy" button to save the explanation.
💡 Pro Tips
- Modify the environment JSON to simulate edge cases (e.g., sensor failure).
- Compare scenarios to see how feature importance changes.
- The pathway visualization helps debug unexpected AI behavior.
- Ideal for safety audits and building trust in autonomous systems.
🔍 Example
Scenario: Robot approaches a crosswalk.
Input: Camera detects red light + pedestrian. LIDAR shows clear path.
Explanation: "Robot STOPs (85% confidence). Primary factor: Red traffic light (65% influence). Secondary: Pedestrian proximity (25%)."
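An explanation string like the one in the example can be assembled from the decision, its confidence, and the ranked factors. A sketch, where the variable names and data layout are assumptions:

```python
# Hypothetical inputs matching the example above.
decision, confidence = "STOP", 85
factors = [("Red traffic light", 65), ("Pedestrian proximity", 25)]

# Format the human-readable summary from the ranked factors.
explanation = (
    f"Robot {decision}s ({confidence}% confidence). "
    f"Primary factor: {factors[0][0]} ({factors[0][1]}% influence). "
    f"Secondary: {factors[1][0]} ({factors[1][1]}%)."
)
print(explanation)
# Robot STOPs (85% confidence). Primary factor: Red traffic light (65% influence). Secondary: Pedestrian proximity (25%).
```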