# Deep Research: GOFAI, Symbolic AI, and Non-Cloud AI Architectures
**Research Spike for:** Timmy Foundation

**Date:** 2026-03-30

**Researcher:** Allegro

**Scope:** Classical AI approaches for sovereign, offline-capable AI systems

---
## Executive Summary

This research explores **Good Old-Fashioned AI (GOFAI)** and **symbolic AI** approaches to expand Timmy's capabilities while reducing cloud dependence. The key finding: **hybrid neuro-symbolic architectures** offer the best path forward, combining neural pattern recognition with symbolic reasoning for transparent, auditable, and offline-capable AI.

### Core Recommendations

1. **Adopt finite state machines (FSMs)** for behavior control and mode switching
2. **Implement production rule systems** for explicit reasoning chains
3. **Build knowledge graphs** for structured memory and inference
4. **Develop symbolic-numeric hybrids** for robust decision-making
5. **Create verifiable reasoning traces** for transparency

---

## 1. GOFAI Foundations

### 1.1 What is GOFAI?

**Good Old-Fashioned AI** refers to the symbolic AI approaches dominant from the 1950s through the 1980s, characterized by:

- Explicit knowledge representation
- Logical reasoning and inference
- Rule-based systems
- Search algorithms
- Structured knowledge bases

**Key Insight for Timmy:** GOFAI systems run entirely locally, require minimal compute, and produce **verifiable, explainable** outputs—critical for sovereign AI.

### 1.2 Core GOFAI Techniques

#### 1.2.1 Production Rule Systems

```
IF <condition> THEN <action>

Example for Timmy:
IF heartbeat_latency > 1000ms AND mlx_active = true
THEN reduce_model_size OR increase_batch_size
```
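A rule like this can be encoded directly in Python. The fact keys and action names below are illustrative placeholders, not existing Timmy APIs:

```python
# Direct encoding of the latency rule above. Fact keys and action
# names are hypothetical placeholders, not existing Timmy APIs.
def apply_latency_rule(facts):
    """IF heartbeat_latency > 1000ms AND mlx_active THEN mitigate load."""
    if facts["heartbeat_latency_ms"] > 1000 and facts["mlx_active"]:
        return ["reduce_model_size", "increase_batch_size"]  # candidate actions
    return []

apply_latency_rule({"heartbeat_latency_ms": 1500, "mlx_active": True})
```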

**Benefits:**

- Transparent decision logic
- Easy to audit and modify
- No training required
- Runs on minimal hardware

**Implementations:**

- **CLIPS** (C Language Integrated Production System) - NASA-developed, battle-tested
- **Drools** - Java-based, enterprise-grade
- **PyKE** (Python Knowledge Engine) - Python-native
- **Custom implementation** - roughly 200 lines of Python

#### 1.2.2 Frame-Based Systems

Frames are data structures for representing stereotyped situations:

```python
frame = {
    "type": "heartbeat_event",
    "slots": {
        "timestamp": {"value": "2026-03-30T02:15:00Z", "type": "datetime"},
        "latency_ms": {"value": 245, "type": "integer", "range": [0, 10000]},
        "mlx_active": {"value": True, "type": "boolean"},
        "inference_time_ms": {"value": 1200, "type": "integer"}
    },
    "relations": {
        "preceded_by": "event_2026-03-30T02:00:00Z",
        "triggers": ["generate_report", "check_health"]
    }
}
```

**Benefits for Timmy:**

- Structured memory representation
- Inheritance for efficient knowledge organization
- Default values for handling missing information
- Attached procedures (daemons) for automatic actions
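The inheritance and default-value behaviors above can be sketched with a simple slot-lookup routine. The `is_a` parent link and the frame table below are an assumed schema for illustration, not part of Timmy today:

```python
# Sketch of slot lookup with inheritance and defaults, assuming frames are
# plain dicts linked by an optional "is_a" parent (hypothetical schema).
FRAMES = {
    "event": {"slots": {"latency_ms": {"default": 0}}},
    "heartbeat_event": {"is_a": "event", "slots": {"mlx_active": {"value": True}}},
}

def get_slot(frame_name, slot):
    """Return a slot's value, falling back to defaults and parent frames."""
    frame = FRAMES.get(frame_name)
    while frame is not None:
        if slot in frame.get("slots", {}):
            s = frame["slots"][slot]
            return s.get("value", s.get("default"))
        frame = FRAMES.get(frame.get("is_a"))  # walk up the is_a chain
    return None

get_slot("heartbeat_event", "latency_ms")  # inherited default from "event"
```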

#### 1.2.3 Semantic Networks

Graph-based knowledge representation:

```
[Timmy] --is_a--> [AI_System]
[Timmy] --runs_on--> [Mac_Studio]
[Mac_Studio] --has--> [MLX]
[MLX] --enables--> [Local_Inference]
[Local_Inference] --requires--> [Model_Weights]
```

**Inference:** Path finding reveals implicit knowledge: following the chain shows that Timmy transitively requires Model_Weights.
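That path-finding inference can be implemented with a breadth-first search over the network stored as triples (a minimal sketch, not an existing Timmy component):

```python
from collections import deque

# The semantic network above, stored as (subject, relation, object) triples.
TRIPLES = [
    ("Timmy", "is_a", "AI_System"),
    ("Timmy", "runs_on", "Mac_Studio"),
    ("Mac_Studio", "has", "MLX"),
    ("MLX", "enables", "Local_Inference"),
    ("Local_Inference", "requires", "Model_Weights"),
]

def find_path(start, goal):
    """Breadth-first search; returns the chain of (relation, node) hops."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for s, rel, o in TRIPLES:
            if s == node and o not in seen:
                seen.add(o)
                queue.append((o, path + [(rel, o)]))
    return None  # no connection

find_path("Timmy", "Model_Weights")
```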

---

## 2. Finite State Machines for Agent Control

### 2.1 FSM Architecture for Timmy

FSMs provide **predictable, verifiable** behavior control—essential for autonomous systems.

```python
class TimmyStateMachine:
    states = {
        'IDLE': {
            'on_enter': 'log_entry',
            'transitions': {
                'heartbeat_due': 'CHECKING',
                'command_received': 'PROCESSING',
                'error_detected': 'RECOVERING'
            }
        },
        'CHECKING': {
            'on_enter': 'perform_health_check',
            'transitions': {
                'healthy': 'GENERATING',
                'unhealthy': 'RECOVERING'
            }
        },
        'GENERATING': {
            'on_enter': 'create_artifact',
            'transitions': {
                'success': 'BROADCASTING',
                'failure': 'RECOVERING'
            }
        },
        'BROADCASTING': {
            'on_enter': 'publish_to_relay',
            'transitions': {
                'published': 'IDLE',
                'failed': 'QUEUING'
            }
        },
        'QUEUING': {
            'on_enter': 'store_for_retry',
            'transitions': {
                'retry_ok': 'BROADCASTING',
                'max_retries': 'ALERTING'
            }
        },
        'RECOVERING': {
            'on_enter': 'attempt_recovery',
            'transitions': {
                'recovered': 'IDLE',
                'unrecoverable': 'ALERTING'
            }
        },
        'PROCESSING': {
            'on_enter': 'execute_command',
            'transitions': {
                'complete': 'IDLE',
                'needs_more_time': 'PROCESSING'
            }
        },
        'ALERTING': {
            'on_enter': 'notify_operator',
            'transitions': {
                'acknowledged': 'IDLE'
            }
        }
    }
```
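A state table in this shape needs a small dispatcher to drive it. The sketch below is hypothetical glue code, shown with a tiny two-state table for illustration:

```python
# Minimal dispatcher for a state table in the shape shown above
# (hypothetical glue code; the two-state table is for illustration only).
STATES = {
    "IDLE": {"transitions": {"heartbeat_due": "CHECKING"}},
    "CHECKING": {"on_enter": "perform_health_check",
                 "transitions": {"healthy": "IDLE"}},
}

def dispatch(states, current, event, handlers):
    """Return the next state, invoking the target's on_enter hook if defined."""
    next_state = states[current]["transitions"].get(event)
    if next_state is None:
        return current  # unknown event in this state: stay put
    hook = states[next_state].get("on_enter")
    if hook in handlers:
        handlers[hook]()  # e.g. handlers["perform_health_check"]
    return next_state

dispatch(STATES, "IDLE", "heartbeat_due", {"perform_health_check": lambda: None})
```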

### 2.2 Hierarchical FSMs

For complex behavior, use **HFSMs** (state machines nested within states):

```
[OPERATIONAL]
├── [HEALTHY]
│   ├── [IDLE]
│   ├── [GENERATING]
│   └── [BROADCASTING]
└── [DEGRADED]
    ├── [REDUCED_FUNCTIONALITY]
    └── [QUEUING_FOR_LATER]

[MAINTENANCE]
├── [UPDATING_MODEL]
└── [BACKING_UP]

[ERROR]
├── [RECOVERABLE]
└── [CRITICAL]
```
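The defining behavior of an HFSM is that an event unhandled by the active leaf state bubbles up to its ancestors. A minimal sketch, with an assumed `parent`-link schema and a trimmed version of the hierarchy above:

```python
# HFSM event resolution sketch (assumed schema): if the active state does
# not handle an event, the lookup walks up to its parent state.
HFSM = {
    "OPERATIONAL":  {"parent": None,          "transitions": {"fatal": "ERROR"}},
    "HEALTHY":      {"parent": "OPERATIONAL", "transitions": {}},
    "IDLE":         {"parent": "HEALTHY",     "transitions": {"heartbeat_due": "GENERATING"}},
    "GENERATING":   {"parent": "HEALTHY",     "transitions": {"success": "BROADCASTING"}},
    "BROADCASTING": {"parent": "HEALTHY",     "transitions": {"published": "IDLE"}},
    "ERROR":        {"parent": None,          "transitions": {}},
}

def resolve(state, event):
    """Walk up the hierarchy until some ancestor handles the event."""
    while state is not None:
        target = HFSM[state]["transitions"].get(event)
        if target:
            return target
        state = HFSM[state]["parent"]
    return None  # unhandled everywhere

resolve("IDLE", "fatal")  # inherited from OPERATIONAL
```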

### 2.3 FSM Benefits for Timmy

| Aspect | Benefit |
|--------|---------|
| **Predictability** | Every state transition is explicit and testable |
| **Debuggability** | Current state always known; full trace available |
| **Recovery** | Clear paths from error states back to operation |
| **Resource Management** | States can manage memory/MLX loading explicitly |
| **Verification** | Model checking can prove safety properties |
| **Offline Operation** | No cloud dependency; fully local |

### 2.4 Implementation Approaches

**Option A: Python-Transitions (Lightweight)**

```python
# Requires: pip install transitions
from transitions import Machine

class Timmy(object):
    pass

timmy = Timmy()
machine = Machine(timmy, states=['idle', 'checking'],
                  transitions=[['check', 'idle', 'checking']],
                  initial='idle')
```

**Option B: Custom Implementation (Full Control)**

- Roughly 300 lines of Python
- No dependencies
- Full introspection
- Serializable state

**Option C: Ragel (State Machine Compiler)**

- Compiles FSMs to efficient code
- Used in network protocols
- Overkill for Timmy initially

---

## 3. Neuro-Symbolic Integration

### 3.1 The Hybrid Approach

Pure neural (MLX) + pure symbolic (GOFAI) = **neuro-symbolic AI**

```
┌─────────────────────────────────────────────────────┐
│                   NEURAL LAYER                      │
│     (MLX-based pattern recognition, embeddings)     │
│                                                     │
│  Input: Raw observations                            │
│  Output: Structured symbols, embeddings             │
└──────────────────┬──────────────────────────────────┘
                   │ symbols/vectors
                   ▼
┌─────────────────────────────────────────────────────┐
│                 SYMBOLIC LAYER                      │
│   (Rule-based reasoning, FSM, knowledge graphs)     │
│                                                     │
│  Input: Symbols from neural layer                   │
│  Output: Actions, decisions, plans                  │
└──────────────────┬──────────────────────────────────┘
                   │ actions
                   ▼
┌─────────────────────────────────────────────────────┐
│                ACTUATION LAYER                      │
│   (Git commits, Nostr events, file operations)      │
└─────────────────────────────────────────────────────┘
```

### 3.2 Neural → Symbolic Bridge

**Perception → Symbolization:**

```python
def neural_to_symbolic(raw_observation):
    """MLX processes raw data, outputs structured symbols.

    mlx_model, classify_with_mlx, discretize_urgency, quantize_confidence
    and compress_for_storage are assumed helper APIs.
    """

    # Neural processing
    embedding = mlx_model.encode(raw_observation)

    # Classification (neural)
    category = classify_with_mlx(embedding)

    # Symbolization (discretization)
    symbol = {
        'type': 'observation',
        'category': category,                          # from neural classifier
        'urgency': discretize_urgency(embedding),      # Low/Medium/High
        'confidence': quantize_confidence(embedding),  # 0-100%
        'raw_embedding': compress_for_storage(embedding)
    }

    return symbol
```

### 3.3 Symbolic → Neural Bridge

**Prompt Engineering as Symbolic Control:**

```python
def symbolic_to_neural(symbolic_goal, context):
    """Symbolic planner generates prompts for neural generation."""

    # Symbolic reasoning about what to generate
    if symbolic_goal['type'] == 'self_reflection':
        prompt_template = """You are {persona}.

Current state: {state}
Recent observations: {observations}
Active concerns: {concerns}

Reflect on your situation and generate insights:
1. What patterns do you notice?
2. What should you prioritize?
3. What actions should you take?"""
    else:
        raise ValueError(f"No template for goal type: {symbolic_goal['type']}")

    prompt = instantiate_template(prompt_template, context)

    # Neural generation
    response = mlx_generate(prompt, max_tokens=500)

    return response
```

### 3.4 Case Study: Heartbeat Decision

**Pure Neural Approach:**

```python
# MLX decides when to heartbeat based on learned patterns
# Problem: opaque, might miss edge cases, requires training data
```

**Pure Symbolic Approach:**

```python
# Fixed 5-minute intervals
# Problem: inflexible, doesn't adapt to load
```

**Neuro-Symbolic Approach:**

```python
def decide_heartbeat_timing():
    # Symbolic: always enforce a maximum interval
    max_interval = 300  # 5 minutes (symbolic constraint)

    # Neural: MLX predicts optimal timing based on activity
    predicted_optimal = mlx_predict_best_timing(
        recent_activity, system_load, user_patterns
    )

    # Symbolic: apply constraints to the neural suggestion
    actual_interval = min(predicted_optimal, max_interval)

    # Symbolic: FSM state affects timing
    if current_state == 'DEGRADED':
        actual_interval = max_interval  # don't stress the system

    return actual_interval
```

---

## 4. Knowledge Representation for Timmy

### 4.1 Local Knowledge Graph

```python
knowledge_graph = {
    "entities": {
        "timmy": {
            "type": "ai_agent",
            "attributes": {
                "location": "local_mac",
                "inference_engine": "mlx",
                "communication": "nostr",
                "state": "operational"
            },
            "relations": {
                "owns": ["artifact_repo", "model_weights"],
                "communicates_with": ["allegro", "ezra", "bezalel"],
                "depends_on": ["relay", "mlx", "git"]
            }
        },
        "relay": {
            "type": "infrastructure",
            "attributes": {
                "host": "167.99.126.228",
                "port": 3334,
                "protocol": "nostr",
                "status": "active"
            }
        }
    },
    "rules": [
        {
            "if": "timmy.state == 'operational' AND relay.status == 'down'",
            "then": "timmy.state = 'degraded'",
            "priority": 10
        },
        {
            "if": "timmy.heartbeat_latency > 5000",
            "then": "alert_operator",
            "priority": 8
        }
    ]
}
```

### 4.2 Querying the Knowledge Graph

```python
def query_kg(graph, pattern):
    """Simple graph query engine"""

    # Query: what depends on the relay?
    if pattern == "depends_on:relay":
        return [entity for entity, data in graph["entities"].items()
                if "relay" in data.get("relations", {}).get("depends_on", [])]

    # Query: what is Timmy's current state?
    if pattern == "timmy.state":
        return graph["entities"]["timmy"]["attributes"]["state"]

    # Query: path from Timmy to the internet
    if pattern == "path:timmy->internet":
        return ["timmy", "relay", "internet"]  # simplified

# Usage
dependents = query_kg(knowledge_graph, "depends_on:relay")
# Returns: ['timmy']
```

### 4.3 Integration with Obsidian

The knowledge graph can sync with an Obsidian vault:

````python
def export_to_obsidian(kg, vault_path):
    """Export knowledge graph as Obsidian markdown"""

    for entity_id, data in kg["entities"].items():
        filename = f"{vault_path}/Entities/{entity_id}.md"

        content = f"""# {entity_id}

**Type:** {data['type']}

## Attributes
{chr(10).join(f"- **{k}:** {v}" for k, v in data['attributes'].items())}

## Relations
{chr(10).join(f"- **{rel}:** {', '.join(targets)}" for rel, targets in data.get('relations', {}).items())}

## Dataview Query
```dataview
LIST FROM [[{entity_id}]]
```
"""

        with open(filename, 'w') as f:
            f.write(content)
````

---

## 5. Implementing Sophistication Without Cloud

### 5.1 Local Training Pipeline

**The Challenge:** Modern LLM training requires massive compute.

**GOFAI Alternative:** Symbolic knowledge compilation

```python
def symbolic_training(accepted_prs, rejected_prs):
    """
    Instead of gradient descent on neural weights,
    extract symbolic rules from accepted changes.
    """

    rules = []

    for pr in accepted_prs:
        # Analyze what made this PR acceptable
        patterns = extract_patterns(pr['diff'])

        # Compile into a rule
        rule = {
            'if': f"change_matches({patterns['conditions']})",
            'then': 'approve_with_confidence_high',
            'learned_from': pr['id'],
            'success_rate': 1.0
        }
        rules.append(rule)

    for pr in rejected_prs:
        # Learn negative rules
        patterns = extract_patterns(pr['diff'])

        rule = {
            'if': f"change_matches({patterns['conditions']})",
            'then': 'reject_with_explanation',
            'learned_from': pr['id'],
            'success_rate': 0.0
        }
        rules.append(rule)

    return rules
```

### 5.2 Incremental Learning

```python
def update_knowledge(new_observation, outcome):
    """
    Bayesian-style knowledge updates without neural training.
    """

    # Find matching rules
    matching_rules = find_matching_rules(new_observation)

    for rule in matching_rules:
        # Update rule confidence (Bayesian update)
        prior = rule['confidence']
        likelihood = 0.9 if outcome == 'success' else 0.1

        rule['confidence'] = bayesian_update(prior, likelihood)
        rule['activations'] += 1
```
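The `bayesian_update` helper is not defined in this report. One simple form, assuming `confidence` is P(rule correct) and a symmetric likelihood for the observed outcome:

```python
# One possible bayesian_update (an assumption, not defined in this report):
# treats `prior` as P(H) and `likelihood` as P(E|H), with P(E|~H) = 1 - likelihood.
def bayesian_update(prior, likelihood):
    """Posterior P(H|E) via Bayes' rule with a symmetric likelihood model."""
    evidence = likelihood * prior + (1 - likelihood) * (1 - prior)
    return (likelihood * prior) / evidence

bayesian_update(0.5, 0.9)  # one success moves a 50% rule to 90%
```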

### 5.3 Model Compression Techniques

**For MLX on Mac:**

| Technique | Description | Benefit |
|-----------|-------------|---------|
| **Quantization** | Reduce precision (FP32 → INT8) | 4x smaller, faster inference |
| **Pruning** | Remove unused weights | 2-10x smaller |
| **Distillation** | Train a small model to mimic a large one | Maintain quality, reduce size |
| **Knowledge graph** | Replace some neural logic with symbolic rules | Zero inference cost for rules |
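The quantization row can be illustrated with back-of-envelope arithmetic: symmetric INT8 quantization scales by the maximum magnitude, rounds, and clamps, turning each 4-byte FP32 weight into 1 byte (a toy sketch, not how MLX quantizes internally):

```python
# Toy symmetric INT8 quantization (illustration only, not MLX's scheme).
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0  # map max magnitude to 127
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.50, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)  # 4 bytes per weight become 1
approx = dequantize(q, scale)      # close to the originals, small error
```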

### 5.4 Hierarchical Caching Strategy

```python
class HierarchicalCache:
    """
    Multi-level cache for MLX responses; avoids redundant inference.
    embed, find_similar, pattern_matches and mlx_generate are assumed helpers.
    """

    def __init__(self):
        self.l1_symbolic = {}   # exact symbol matches (O(1))
        self.l2_embedding = {}  # similar embeddings (approximate)
        self.l3_pattern = {}    # pattern-based (regex/rules)
        self.l4_neural = None   # MLX (slowest, most general)

    def get(self, query):
        # L1: exact symbolic match
        if query in self.l1_symbolic:
            return self.l1_symbolic[query]

        # L2: embedding similarity
        query_emb = embed(query)
        similar = find_similar(query_emb, self.l2_embedding, threshold=0.95)
        if similar:
            return similar['response']

        # L3: pattern matching
        for pattern, response in self.l3_pattern.items():
            if pattern_matches(pattern, query):
                return response

        # L4: neural generation (slowest)
        response = mlx_generate(query)

        # Cache for future lookups
        self._cache(query, query_emb, response)

        return response

    def _cache(self, query, query_emb, response):
        # Populate the fast layers so repeated queries skip MLX entirely
        self.l1_symbolic[query] = response
        self.l2_embedding[query] = {'embedding': query_emb, 'response': response}
```

---

## 6. Verification and Safety

### 6.1 Formal Verification of FSMs

```python
def verify_fsm(machine):
    """
    Model checking for safety properties.
    """

    # Property: no deadlock (every state has an outgoing transition)
    for state_name, state_def in machine.states.items():
        if not state_def.get('transitions'):
            print(f"WARNING: State {state_name} has no outgoing transitions!")

    # Property: no unreachable states
    reachable = compute_reachable_states(machine, 'initial')
    for state_name in machine.states:
        if state_name not in reachable:
            print(f"WARNING: State {state_name} is unreachable!")

    # Property: error recovery (every error state can reach idle)
    for state_name in machine.states:
        if 'error' in state_name.lower() or 'fail' in state_name.lower():
            can_recover = path_exists(machine, state_name, 'idle')
            if not can_recover:
                print(f"CRITICAL: Error state {state_name} cannot recover to idle!")
```

### 6.2 Explainability via Symbolic Traces

```python
def generate_explanation(decision, trace):
    """
    Generate a human-readable explanation of an AI decision.
    """

    explanation = []
    explanation.append(f"Decision: {decision['action']}")
    explanation.append("")
    explanation.append("Reasoning chain:")

    for step in trace:
        if step['type'] == 'rule_fired':
            explanation.append(f"  - Rule '{step['rule_name']}' fired because: {step['condition']}")
        elif step['type'] == 'neural_suggestion':
            explanation.append(f"  - Neural model suggested with {step['confidence']:.0%} confidence")
        elif step['type'] == 'symbolic_override':
            explanation.append(f"  - Symbolic constraint overrode neural suggestion: {step['reason']}")

    return "\n".join(explanation)

# Example output:
"""
Decision: Reduce heartbeat interval to 3 minutes

Reasoning chain:
  - Rule 'high_activity_detected' fired because: 5 PRs merged in last hour
  - Neural model suggested with 78% confidence: increase frequency
  - Symbolic constraint overrode neural suggestion: max_frequency <= 20/hour
  - Final decision: 3 minutes (20/hour) balances responsiveness and resource use
"""
```

---

## 7. Implementation Roadmap

### Phase 1: Foundation (Immediate)

1. **Implement basic FSM** for Timmy's core loop
2. **Create rule engine** (50 production rules)
3. **Build knowledge graph** schema

### Phase 2: Integration (2 weeks)

1. **Neural-symbolic bridge** for MLX integration
2. **Hierarchical cache** for inference optimization
3. **Symbolic training** from accepted PRs

### Phase 3: Sophistication (1 month)

1. **Advanced reasoning** (forward/backward chaining)
2. **Knowledge graph** population from artifacts
3. **Self-modifying rules** (learning)

### Phase 4: Verification (6 weeks)

1. **Formal verification** of critical FSMs
2. **Comprehensive testing** of rule systems
3. **Safety constraints** implementation

---

## 8. Code Examples

### 8.1 Minimal FSM Implementation

```python
class FSM:
    def __init__(self, states, transitions, initial):
        self.states = states
        self.transitions = transitions
        self.state = initial
        self.history = []

    def trigger(self, event):
        if event in self.transitions.get(self.state, {}):
            new_state = self.transitions[self.state][event]
            self.history.append((self.state, event, new_state))
            self.state = new_state
            return True
        return False

    def can(self, event):
        return event in self.transitions.get(self.state, {})

# Usage for Timmy
timmy_fsm = FSM(
    states=['idle', 'working', 'error'],
    transitions={
        'idle': {'heartbeat': 'working', 'error': 'error'},
        'working': {'complete': 'idle', 'error': 'error'},
        'error': {'recover': 'idle'}
    },
    initial='idle'
)
```

### 8.2 Minimal Rule Engine

```python
class RuleEngine:
    def __init__(self):
        self.rules = []

    def add_rule(self, condition, action, priority=0):
        self.rules.append({
            'condition': condition,
            'action': action,
            'priority': priority
        })
        self.rules.sort(key=lambda r: -r['priority'])

    def evaluate(self, facts):
        for rule in self.rules:
            if rule['condition'](facts):
                return rule['action'](facts)
        return None

# Usage
engine = RuleEngine()
engine.add_rule(
    condition=lambda f: f['latency'] > 1000,
    action=lambda f: "reduce_load",
    priority=10
)
```
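The engine above fires a single rule per call. The forward chaining mentioned in the roadmap can be sketched on the same rule shape by letting actions assert new facts and re-firing until a fixpoint (a sketch, not part of the engine above; the two demo rules are hypothetical):

```python
# Forward chaining sketch: fire matching rules repeatedly, letting each
# action assert new facts, until nothing changes (the rules are examples).
def forward_chain(rules, facts):
    """Each rule is (condition, action); action returns facts to assert."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(facts):
                new_facts = action(facts)
                if new_facts and not (new_facts.items() <= facts.items()):
                    facts.update(new_facts)
                    changed = True
    return facts

rules = [
    (lambda f: f.get('latency', 0) > 1000, lambda f: {'load': 'high'}),
    (lambda f: f.get('load') == 'high',    lambda f: {'action': 'reduce_load'}),
]
forward_chain(rules, {'latency': 1500})  # chains through load='high'
```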

---

## 9. References and Resources

### Papers

- **"Neuro-Symbolic AI: The 3rd Wave"** - Artur d'Avila Garcez et al.
- **"The Garden of Forking Paths"** - FSM analysis in AI
- **"Making AI Intelligible"** - Explainable symbolic systems

### Books

- **"Artificial Intelligence: A Modern Approach"** (Russell & Norvig) - Chapters 8-12
- **"Paradigms of Artificial Intelligence Programming"** (Peter Norvig)
- **"Knowledge Representation and Reasoning"** (Brachman & Levesque)

### Tools

- **Python-Transitions:** `pip install transitions`
- **CLIPS:** `pip install clipspy`
- **NetworkX:** knowledge graph operations
- **SymPy:** symbolic mathematics

### Code Repositories

- github.com/pytransitions/transitions
- github.com/norvig/paip-lisp (classic AI algorithms)

---

## 10. Summary

### Key Takeaways

1. **Symbolic AI runs entirely offline** - no cloud required
2. **Neuro-symbolic combines strengths** - pattern recognition plus reasoning
3. **FSMs provide verifiable control** - predictable, testable behavior
4. **Knowledge graphs structure memory** - queryable, auditable
5. **Rule engines enable transparent decisions** - explainable by design

### For Timmy Specifically

| Component | Current | Proposed | Benefit |
|-----------|---------|----------|---------|
| Control | Simple loop | FSM | Recovery, verification |
| Reasoning | MLX-only | Neuro-symbolic | Transparency, offline |
| Memory | Files | Knowledge graph | Queryable, structured |
| Learning | None | Symbolic from PRs | Continuous improvement |
| Cache | None | Hierarchical | Speed, reduced MLX calls |

### Next Steps

1. Implement core FSM (2 hours)
2. Create basic rule engine (4 hours)
3. Build knowledge graph schema (2 hours)
4. Integrate with existing heartbeat system (4 hours)
5. Test and iterate (ongoing)

---

**Research Complete**

*This report provides the foundation for expanding Timmy's capabilities using classical AI techniques that require no cloud connectivity while maintaining transparency and verifiability.*