A proprietary, self-learning AI with vectorized Long-Term Memory for the "Digital Oasis" club environment (envyro.club).
Envyro-Core is a custom Transformer-based Language Model with an integrated Long-Term Memory system using PostgreSQL + pgvector. It implements a "Cognitive Loop" that queries its vector database for context before generating responses, enabling it to learn from every interaction.
- Custom Transformer Architecture: Built from scratch using PyTorch
- Vectorized Long-Term Memory: PostgreSQL + pgvector for semantic similarity search
- Cognitive Loop: AI recalls relevant memories before generating responses
- Weight Management: Xavier/He initialization with save/load capabilities
- Admiral System: God Mode access for neural weight management and knowledge pruning
- Custom Transformer-based LLM (PyTorch/NumPy)
- Multi-head attention mechanism
- Position-wise feed-forward networks
- Configurable depth and dimensions
- PostgreSQL with pgvector extension
- 1536-dimensional embeddings
- Cosine similarity search
- Role-based memory attribution (Admiral, User, Sprout)
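To make the recall mechanism concrete, here is a minimal, dependency-free sketch of cosine-similarity ranking — the same scoring pgvector performs inside the database. The `recall` helper and the toy 3-dimensional embeddings are illustrative assumptions; the real system uses 1536-dimensional vectors and pgvector's SQL operators.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recall(query_embedding, memories, top_k=5):
    """Rank stored memories by similarity to the query embedding."""
    scored = [(cosine_similarity(query_embedding, m["embedding"]), m) for m in memories]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [m for _, m in scored[:top_k]]

# Toy memories with 3-dimensional embeddings (illustrative only)
memories = [
    {"content": "The Digital Oasis is the club environment", "embedding": [0.9, 0.1, 0.0]},
    {"content": "Unrelated note", "embedding": [0.0, 0.2, 0.9]},
]
print(recall([1.0, 0.0, 0.0], memories, top_k=1)[0]["content"])
```

In production the ranking happens in SQL, so only the top-k rows cross the wire.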
- Python 3.8+
- PostgreSQL 12+ with pgvector extension
- CUDA (optional, for GPU acceleration)
- Install Dependencies
pip install -r requirements.txt
- Set Up Database
# Create PostgreSQL database
createdb envyro
# Initialize database schema
psql -d envyro -f init_db.sql
# Create Admiral account (interactive)
python setup_admiral.py
- Configure Environment
# Copy example environment file
cp .env.example .env
# Edit .env and set strong passwords
# IMPORTANT: Change all passwords before deployment!
nano .env
- Launch Envyro Web Interface (Recommended)
# Start the web-based launcher
python launch.py
The Envyro Web Launcher provides a browser-based interface to:
- Start/Stop Services: Control PostgreSQL and Envyro-Core containers
- Upload Files: Add files for AI processing via drag & drop
- Configure Settings: Manage environment variables and model parameters
- Monitor System: View real-time logs and system status
from envyro_core import EnvyroAI
from envyro_core.config import EnvyroConfig
# Initialize EnvyroAI with database
db_config = EnvyroConfig.get_db_config()
ai = EnvyroAI(
vocab_size=50000,
d_model=512,
n_heads=8,
n_layers=6,
db_config=db_config
)
# The Cognitive Loop: Recall + Generate
response = ai.cognitive_loop(
input_text="Tell me about the Digital Oasis",
use_memory=True
)
# Learn from interaction
ai.learn_from_interaction(
query="What is Envyro?",
response="Envyro is a self-learning AI ecosystem...",
user_role="admiral"
)
python example.py
The Envyro Web Launcher (python launch.py) provides four main tabs:
- Service Management: Start, stop, and restart individual services
- Status Monitoring: Real-time status of PostgreSQL and Envyro-Core containers
- Global Controls: Start/stop all services at once
- Visual Status: Color-coded service status indicators
- Drag & Drop Upload: Upload files by dragging them into the browser
- File Management: View, remove, and organize uploaded files
- AI Processing: Process uploaded files with Envyro AI
- Supported Formats: Text, Python, JSON, Markdown, PDF, and images
- Environment Variables: Edit database and model configuration
- Save/Load Settings: Persist configuration changes
- Reset to Defaults: Restore original settings
- Real-time Updates: Changes apply immediately
- System Output: View all launcher operations and logs
- Test Runner: Execute the comprehensive test suite
- Command History: Track all operations performed
- Auto-refresh: Logs update automatically
# Get neural network statistics
stats = ai.get_admiral_stats()
print(f"Total parameters: {stats['parameters']:,}")
# Save/Load neural weights
ai.save_weights("envyro_weights.pt")
ai.load_weights("envyro_weights.pt")
# Memory pruning
if ai.memory:
ai.memory.delete_memory(memory_id=123)
ai.memory.clear_all(confirm=True)  # Dangerous!
Envyro/
├── envyro_core/           # Core AI package
│   ├── __init__.py
│   ├── envyro_ai.py       # Main EnvyroAI class
│   ├── config.py          # Configuration
│   ├── models/            # Neural network models
│   │   ├── __init__.py
│   │   └── transformer.py # Custom Transformer
│   ├── memory/            # Long-Term Memory
│   │   ├── __init__.py
│   │   └── vector_memory.py
│   └── utils/             # Utilities
├── requirements.txt       # Python dependencies
├── init_db.sql            # Database schema
├── example.py             # Usage example
└── README.md              # This file
EnvyroAI's core interaction pattern:
- Recall: Query the vector database for relevant memories
- Context: Incorporate retrieved context into the prompt
- Generate: Use the Transformer to generate a response
- Learn: Store the interaction in Long-Term Memory
# Step 1: Recall
memories = ai.recall("What is consciousness?", top_k=5)
# Step 2-4: Cognitive Loop handles it all
response = ai.cognitive_loop("What is consciousness?")
The Admiral has complete "God Mode" over:
- Neural Weights: Save, load, and prune model parameters
- Knowledge Base: Add, delete, and clear memories
- User Management: Control access privileges
Creating an Admiral Account:
- No default Admiral account exists for security
- Use setup_admiral.py to create an Admiral with a strong password
- Minimum 8-character password required
- id: Serial primary key
- content: Text content
- embedding: Vector(1536) - semantic embedding
- created_by: Creator role (admiral/user/sprout)
- created_at: Timestamp
- id: Serial primary key
- username: Unique username
- password_hash: Password hash
- role: User role (admiral/user/sprout)
Configure via environment variables or EnvyroConfig:
- ENVYRO_VOCAB_SIZE: Vocabulary size (default: 50000)
- ENVYRO_D_MODEL: Model dimension (default: 512)
- ENVYRO_N_HEADS: Number of attention heads (default: 8)
- ENVYRO_N_LAYERS: Number of transformer layers (default: 6)
- ENVYRO_D_FF: Feed-forward dimension (default: 2048)
- ENVYRO_MAX_SEQ_LENGTH: Maximum sequence length (default: 512)
- ENVYRO_DROPOUT: Dropout rate (default: 0.1)
- ENVYRO_DB_HOST: Database host
- ENVYRO_DB_PORT: Database port
- ENVYRO_DB_NAME: Database name
- ENVYRO_DB_USER: Database user
- ENVYRO_DB_PASSWORD: Database password
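A minimal sketch of what an env-var-driven loader for these settings might look like. The function name mirrors `EnvyroConfig.get_db_config()` from the usage example, but the defaults shown here are assumptions, not the project's actual values:

```python
import os

# Hypothetical loader following the env-var-with-default pattern;
# the real EnvyroConfig class may differ.
def get_db_config():
    return {
        "host": os.environ.get("ENVYRO_DB_HOST", "localhost"),
        "port": int(os.environ.get("ENVYRO_DB_PORT", "5432")),
        "dbname": os.environ.get("ENVYRO_DB_NAME", "envyro"),
        "user": os.environ.get("ENVYRO_DB_USER", "envyro"),
        "password": os.environ.get("ENVYRO_DB_PASSWORD", ""),
    }

# Environment variables override the defaults:
os.environ["ENVYRO_DB_PORT"] = "5433"
print(get_db_config()["port"])  # 5433
```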
- Admiral Account Setup
  - Use setup_admiral.py to create Admiral account with strong password
  - No default Admiral account is created by init_db.sql
  - Minimum 8 character password required
  - Never use "admin" as the password
- Password Hashing
  - Admiral password is stored as a bcrypt hash (cost factor 12, configurable via BCRYPT_COST_FACTOR)
  - Manual hash generation (secure method):

        import bcrypt
        import getpass
        password = getpass.getpass("Enter password: ")
        print(bcrypt.hashpw(password.encode(), bcrypt.gensalt(12)).decode())
- Database Security
  - Use environment variables for database credentials
  - Copy .env.example to .env and set strong passwords
  - Never commit .env to version control (already in .gitignore)
  - Restrict database access to specific IPs in production
- Docker Security
  - Set POSTGRES_PASSWORD environment variable
  - Use Docker secrets in production
  - Don't use default credentials from docker-compose.yml
- API Security (Future)
  - Implement JWT authentication
  - Use HTTPS/TLS for all connections
  - Rate limit API endpoints
  - Validate and sanitize all user inputs
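As a rough illustration of the planned JWT authentication, here is a stdlib-only HS256-style sign/verify sketch. This is not the project's implementation — a production system should use a vetted library such as PyJWT — and the secret is a placeholder:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"  # placeholder; load from environment in practice

def _b64(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict) -> str:
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    return f"{header}.{body}.{_b64(sig)}"

def verify_token(token: str) -> bool:
    header, body, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest()
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(_b64(expected), sig)

token = sign_token({"sub": "admiral", "role": "admiral"})
print(verify_token(token))  # True
```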
docker-compose up -d
- Install PostgreSQL with pgvector
- Install Python dependencies
- Initialize database
- Configure environment
- Run FastAPI server (integration required)
- Core Transformer architecture
- Weight initialization
- Vector memory integration
- Cognitive Loop implementation
- Tokenizer implementation
- Training pipeline
- FastAPI REST API
- React/Tailwind UI
- Rust backend integration
- Docker orchestration
- Production deployment
- Neural Engine: Python, PyTorch, NumPy
- Memory: PostgreSQL, pgvector
- Backend: Rust (planned), FastAPI
- Frontend: React, Tailwind (planned)
- Deployment: Docker, Nginx SSL
- The current embedding implementation is a placeholder. In production, use proper embedding models (e.g., sentence-transformers).
- Tokenization is not yet implemented. The generate function is a placeholder.
- This is the core neural engine. API and UI integration are separate components.
Proprietary - All rights reserved.
For inquiries about the Envyro project, visit: envyro.club
Next-Generation Post-Containerization Engine
A zero-trust, high-concurrency container runtime that replaces traditional Docker daemons with a multi-language architecture built for performance and security.
# 1. Initialize a new environment
enviro init --name my-app
# 2. Edit the generated Envirofile.toml (see below)
# 3. Build it
enviro build
# 4. Run it
enviro run
# 5. Share it with the world
enviro push
| Command | Description |
|---|---|
| enviro init | Scaffold a new Envirofile.toml in the current directory |
| enviro build | Build an environment from an Envirofile |
| enviro run | Run an environment (use -d for detached mode) |
| enviro stop <id> | Stop a running environment |
| enviro ps | List running environments (-a for all) |
| enviro logs <id> | View environment logs (-f to follow) |
| enviro push | Push environment to the registry |
| enviro pull <name> | Pull environment from the registry |
| enviro search <query> | Search the environment registry |
| enviro registry list | List locally stored environments |
| enviro registry remove <name> | Remove a local environment |
| enviro metrics | Show runtime performance metrics |
| enviro validate | Validate an Envirofile without building |
Instead of imperative Dockerfiles, Enviro uses declarative TOML:
# Envirofile.toml
[environment]
name = "my-web-app"
base = "ubuntu:22.04"
description = "A production web application"
version = "1.0.0"
tags = ["web", "python"]
[packages]
apt = ["nginx", "curl"]
pip = ["flask", "gunicorn"]
[env]
PORT = "8080"
APP_ENV = "production"
[run]
command = "gunicorn app:app"
workdir = "/opt/app"
ports = [8080]
[resources]
cpu = 2.0
memory = "1g"
pids = 200
[health]
command = "curl -f http://localhost:8080/health"
interval = 30
timeout = 10
retries = 3
Why Envirofile is better than Dockerfile:
- Declarative: Say what you want, not how to build it
- Type-safe: TOML parsing catches errors before build time
- Resource-aware: CPU, memory, and PID limits are first-class
- Health checks built-in: No separate HEALTHCHECK instruction
- Registry-ready: Tags and metadata for instant sharing
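To illustrate the "catches errors before build time" point, a hypothetical validator over a parsed Envirofile might look like this. The required keys and rules below are assumptions for the sketch, not the actual checks `enviro validate` performs:

```python
# Hypothetical Envirofile validator; the real tool's rules are not shown here.
REQUIRED = {"environment": ["name", "base"], "run": ["command"]}

def validate_envirofile(config: dict) -> list:
    """Return a list of validation errors (empty list means valid)."""
    errors = []
    for section, keys in REQUIRED.items():
        if section not in config:
            errors.append(f"missing [{section}] section")
            continue
        for key in keys:
            if key not in config[section]:
                errors.append(f"missing '{key}' in [{section}]")
    # Example structural rule: memory limits carry a unit suffix
    mem = config.get("resources", {}).get("memory")
    if mem is not None and not str(mem).endswith(("k", "m", "g")):
        errors.append("resources.memory must end with k/m/g")
    return errors

config = {
    "environment": {"name": "my-web-app", "base": "ubuntu:22.04"},
    "run": {"command": "gunicorn app:app"},
    "resources": {"memory": "1g"},
}
print(validate_envirofile(config))  # []
```

Because the file is data rather than a script, every rule can run before a single build step executes.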
Share and discover environments just like npm or crates.io:
# Push your environment
enviro push
# Search for environments
enviro search "web flask"
# Pull someone else's environment
enviro pull username/web-app
# List your local environments
enviro registry list
Enviro combines the strengths of four languages:
- Rust - Async orchestration with tokio, low-level Linux primitives via nix
- Zig - High-speed syscall wrapping and custom memory allocation
- Go - gRPC control plane and eBPF networking
- Python - Developer SDK with PyO3 for programmatic container definitions
enviro/
├── enviro-core/           # Rust core runtime
│   ├── src/
│   │   ├── engine/        # Isolation and namespace management
│   │   ├── executor/      # Language-agnostic execution trait
│   │   ├── ffi/           # Foreign function interface (Zig/Go)
│   │   ├── plugin/        # Dynamic plugin loading system
│   │   ├── envirofile.rs  # Envirofile TOML parser
│   │   ├── registry.rs    # Environment registry (push/pull/search)
│   │   ├── monitor.rs     # Container monitoring and lifecycle
│   │   └── main.rs        # CLI with subcommands
│   └── build.rs           # Orchestrates Zig + Go compilation
│
├── enviro-zig/            # Zig C-ABI bridge
│   └── src/
│       ├── oom_tuner.zig  # OOM killer tuning
│       └── allocator.zig  # Custom memory allocator
│
├── enviro-go/             # Go control plane
│   └── pkg/
│       ├── control/       # gRPC server
│       └── network/       # eBPF networking
│
└── enviro-py/             # Python SDK
    ├── enviro/            # High-level API
    └── examples/          # Example Envirofiles
The Executor trait allows any language that compiles to .so (shared library) or .wasm to integrate with Enviro:
#[async_trait]
pub trait Executor: Send + Sync {
async fn prepare(&mut self, ctx: &ExecutionContext) -> Result<()>;
async fn execute(&self, ctx: &ExecutionContext, command: &str, args: &[String]) -> Result<ExecutionResult>;
async fn cleanup(&mut self, ctx: &ExecutionContext) -> Result<()>;
fn executor_type(&self) -> &str;
}
Superior security through user namespace mapping:
Outside Container: UID 1000 (unprivileged)
↓ mapping
Inside Container: UID 0 (root)
Even if an attacker gains root inside the container, they have no privileges on the host.
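The mapping itself is a one-line entry written to /proc/<pid>/uid_map (format per proc(5)). A small Python sketch that builds that line — actually writing it requires a freshly unshared user namespace and is omitted here:

```python
def uid_map_line(inside_uid: int, outside_uid: int, count: int = 1) -> str:
    # proc(5) uid_map format: <inside-uid> <outside-uid> <count>
    return f"{inside_uid} {outside_uid} {count}\n"

# Map host UID 1000 to root (UID 0) inside the container:
line = uid_map_line(0, 1000)
print(line.strip())  # 0 1000 1
# A real runtime writes this to /proc/<child-pid>/uid_map after
# clone(CLONE_NEWUSER) or unshare(2), then does the same for gid_map.
```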
High-performance syscall wrapping with ~30% better performance than Rust's safe abstractions:
// Rust calls Zig for OOM tuning
pub fn tune_oom_killer(pid: u32, oom_score_adj: i32, enable: bool) -> Result<(), String>
// Zig implementation - direct syscall, zero overhead
export fn zig_tune_oom_killer(config: OomConfig) c_int {
// Opens /proc/[pid]/oom_score_adj and writes value
// Total: 3 syscalls vs 5-7 in typical wrappers
}
Hot-swappable executors via libloading:
let mut registry = PluginRegistry::new();
registry.load_plugin("zig-executor".to_string(), PathBuf::from("./plugins/zig_executor.so"))?;
Replace static YAML with dynamic Python:
from enviro import Container, Envirofile
# Dynamic configuration based on environment
production = os.getenv("ENV") == "production"
web = Container(
name="web-app",
image="nginx:latest",
cpu=4.0 if production else 1.0,
memory="8GB" if production else "1GB"
)
if production:
web.replicas = 10
web.run()
Checkpoint running containers and resume on different nodes:
// Executor trait supports checkpointing
async fn checkpoint(&self, ctx: &ExecutionContext, path: &str) -> Result<()>;
async fn restore(&mut self, ctx: &ExecutionContext, path: &str) -> Result<()>;
- Rust 1.75+ (rustc --version)
- Zig 0.11+ (zig version)
- Go 1.21+ (go version)
- Python 3.8+ (python --version)
The Rust build.rs automatically compiles Zig and Go:
cd enviro-core
cargo build --release
This produces:
- target/release/enviro - Main binary
- target/release/libenviro_core.{a,so} - Rust library
- Linked Zig static library (libenviro_zig.a)
- Linked Go shared library (libenviro_go.so)
# Zig only
cd enviro-zig
zig build
# Go only
cd enviro-go
go build -buildmode=c-shared -o libenviro_go.so ./pkg/control
# Python SDK
cd enviro-py
pip install -e .
# Rust tests
cd enviro-core
cargo test
# Zig tests
cd enviro-zig
zig build test
# Python SDK
cd enviro-py
python -m pytest
Pre-built binaries for Linux, macOS, and Windows are available in GitHub Releases.
See RELEASE.md for details on:
- Creating new releases
- Supported platforms
- Download verification
- Build configuration
Throughout the codebase, you'll find performance-first patterns:
- Zero-Copy Operations: Memory is passed by reference, not copied
- Lock-Free Data Structures: Where possible (executor registry)
- Lazy Initialization: Resources allocated only when needed
- Batch Operations: UID/GID mapping done in single syscall
- io_uring: For async I/O on Linux 5.1+ (planned)
- Thread-Per-Core: Architecture with work stealing (planned)
- Zero Trust by Default: All containers run in unprivileged user namespaces
- Capability Dropping: Minimize Linux capabilities
- Network Isolation: Each container in separate network namespace
- Mount Isolation: Private /proc, /sys, and filesystem views
- PID Isolation: Containers can't see host processes
use enviro_core::{Isolation, IsolationConfig};
#[tokio::main]
async fn main() -> Result<()> {
let isolation = Isolation::with_defaults();
isolation.create_user_namespace()?;
let mut cmd = Command::new("/bin/bash");
let child = isolation.exec_in_namespace(cmd)?;
Ok(())
}
from enviro import Container
web = Container(
name="nginx",
image="nginx:latest",
cpu=2.0,
memory="4GB"
)
handle = web.run()
print(handle.logs())
handle.stop()
Envyro's container runtime is designed for speed and efficiency:
These benchmarks measure Envyro's internal operations. Comparisons to Docker are projected targets based on architecture design and will be validated with end-to-end benchmarks once full system integration is complete.
| Operation | Envyro (measured) | Target vs Docker |
|---|---|---|
| Container context creation | ~3µs | Significantly faster |
| Namespace setup (cached) | ~1µs | Cached template reuse |
| Resource limit batch apply | ~6µs | Batched operations |
- Buffer Pool: Zero-copy I/O with buffer reuse, eliminating allocation overhead
- Context Pool: Pre-allocated execution contexts with automatic recycling
- Copy-on-Write: Shared resources cloned only on mutation via Arc-based CoW
- io_uring: Feature-gated async I/O for kernel-bypassing file operations (Linux 5.1+)
- Parallel Namespace Setup: Concurrent user/network/mount/PID namespace creation via tokio::join!
- Cached Namespace Templates: Pre-computed configurations with cache hit/miss tracking
- Lazy Initialization: Resources created on-demand via OnceLock, reducing startup overhead
- Lock-Free Registry: RwLock-based concurrent executor registry for thread-safe access
- Batched Resource Limits: Multiple cgroup operations collected and applied in a single pass
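The context-pool idea above can be sketched in a few lines of Python — the names and structure are illustrative, not Enviro's actual Rust implementation:

```python
# Minimal object-pool sketch: pre-allocate execution contexts and recycle
# them instead of constructing a fresh one per container run.
class ContextPool:
    def __init__(self, size: int):
        self._free = [{"env": {}, "cwd": "/"} for _ in range(size)]
        self.allocations = size  # contexts ever constructed

    def acquire(self) -> dict:
        if self._free:
            return self._free.pop()
        self.allocations += 1  # pool exhausted: construct a new context
        return {"env": {}, "cwd": "/"}

    def release(self, ctx: dict) -> None:
        ctx["env"].clear()  # reset state before the context is reused
        ctx["cwd"] = "/"
        self._free.append(ctx)

pool = ContextPool(size=2)
ctx = pool.acquire()
ctx["env"]["PORT"] = "8080"
pool.release(ctx)
print(pool.allocations)  # 2: releasing and reusing avoided a new allocation
```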
- Link-Time Optimization (LTO) enabled
- Single codegen unit for maximum optimization
- Symbol stripping in release builds
- Panic abort (no unwind tables)
Run benchmarks with:
cargo test --test benchmarks -- --ignored
See ARCHITECTURE.md for detailed performance architecture.
- Core Rust runtime with namespace isolation
- Zig FFI bridge for OOM tuning
- Go gRPC control plane skeleton
- Python SDK with Envirofile support
- Plugin system for hot-swapping executors
- Advanced performance optimizations (io_uring, zero-copy, caching)
- Memory efficiency (pools, CoW, concurrent registry)
- Performance benchmarks and metrics
- CLI with subcommands (init, build, run, stop, ps, logs, push, pull, search)
- Declarative Envirofile format (TOML-based)
- Local environment registry (store, search, share)
- Container monitoring and lifecycle tracking
- Remote registry API (HTTP-based hub)
- Full CRIU checkpoint/restore implementation
- eBPF networking with XDP
- Hardware passthrough (GPU/NPU/FPGA)
- WebAssembly executor via wasmtime
- Distributed control plane with etcd
Active development happens on the dev branch. The main branch tracks stable releases.
# Clone and switch to the dev branch
git clone https://github.com/Deployed-Labs/Envyro.git
cd Envyro
git checkout dev
MIT OR Apache-2.0
Contributions welcome! This is a cutting-edge project exploring multi-language systems programming.
See CONTRIBUTING.md for guidelines.
For questions or help, open a Help issue.