Inference and Prediction Research

Inference Compaction

AI models currently follow the inference scaling law: spending more compute at inference time tends to produce better answers. Increasing the context length also lets a model reason over source material directly, without resorting to Retrieval-Augmented Generation (RAG). Essentially, a longer context leads to higher intelligence.

The main problem with inference scaling is that it is slow. A modern model might require 24 hours of inference to reach a conclusion, and a model given a context of over two million tokens can take about ten minutes to respond. This is reminiscent of Deep Thought, the computer in The Hitchhiker's Guide to the Galaxy that takes an enormous amount of time to produce its answers: every follow-up question means another long wait.

We propose inference compaction to solve this issue. Instead of inferring the answer directly, we infer the underlying theory needed to produce the answer. This is like a scientist deriving the mass-energy equivalence E = mc² from experimental data: once the equation exists, anyone can apply it without repeating the scientist's work. Inference compaction covers methods such as AI-driven paper writing, building physical models, knowledge distillation, and symbolic regression.

Knowledge distillation transfers knowledge from large, slow models to smaller, faster ones without significant loss in performance, enabling faster reasoning.
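As a minimal illustration of the idea (not ipr's actual training code), the core of distillation is a loss that pushes the small model's output distribution toward the large model's softened distribution. The sketch below computes that KL-divergence loss in plain NumPy; the temperature value and logits are illustrative:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's prediction.

    Training the student to minimize this makes it mimic the large, slow
    teacher's full output distribution rather than just hard labels.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's prediction
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# A student that matches the teacher incurs zero loss; a mismatched one does not.
loss_same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
loss_diff = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

In practice this loss is summed over a training corpus and minimized by gradient descent on the student's weights; the distilled student then answers quickly on its own.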

Symbolic regression automatically discovers mathematical equations or physical laws from data, extracting underlying laws and representing them as theories.
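To make the idea concrete, here is a deliberately tiny symbolic-regression sketch (real systems search far larger expression spaces): it tries a handful of candidate basis functions, fits a coefficient to each by least squares, and returns the expression that best explains the data. The candidate set and data are illustrative:

```python
import math

# Candidate building blocks for a tiny symbolic search.
CANDIDATES = {"x": lambda x: x, "x^2": lambda x: x * x, "sin(x)": math.sin}

def fit_error(f, data):
    """Sum of squared residuals of f over (x, y) pairs."""
    return sum((f(x) - y) ** 2 for x, y in data)

def symbolic_regress(data):
    """Search expressions of the form a * g(x) and return the
    (name, coefficient, error) triple with the smallest squared error."""
    best = None
    for name, g in CANDIDATES.items():
        # Closed-form least-squares coefficient for y ≈ a * g(x).
        num = sum(g(x) * y for x, y in data)
        den = sum(g(x) ** 2 for x, _ in data) or 1e-12
        a = num / den
        err = fit_error(lambda x, a=a, g=g: a * g(x), data)
        if best is None or err < best[2]:
            best = (name, a, err)
    return best

# Data generated by y = 3x^2 is recovered as the law "3 * x^2".
data = [(x, 3 * x * x) for x in (-2.0, -1.0, 0.5, 1.0, 2.0)]
name, coeff, err = symbolic_regress(data)
```

The output of such a search is exactly the kind of compact, human-readable theory that inference compaction aims for: the slow discovery process runs once, and the recovered formula is cheap to evaluate forever after.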

Related Research Areas

Automated Scientific Discovery: Using AI to find new scientific insights from data.
Neural-Symbolic Approaches: Combining neural networks with symbolic reasoning to represent knowledge compactly.
Meta-Learning: Learning to adapt quickly to new tasks, achieving faster inference.
Model Compression and Optimization: Techniques like pruning and quantization to enhance inference speed.
Program Synthesis: Automatically generating programs from specifications or data.
Causal Inference and Discovery: Identifying causal relationships to build efficient models.
Model-Based Reinforcement Learning: Learning environment models to make efficient decisions.

The results of inference compaction aren't limited to mathematical formulas. After processing large amounts of data, inference compaction discovers underlying laws and formulates them into theories, which can be equations, programs, or fast models. For example, given the theory "Tokyo's summer traffic congestion can be estimated with 90% accuracy from yesterday's temperature and humidity," one can predict tomorrow's congestion by hand, without rerunning the slow model that discovered the rule.
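A compacted theory like the congestion rule above might be shipped as a trivially cheap function. The sketch below is purely hypothetical: the coefficients and threshold are made up for illustration, not derived from real traffic data:

```python
def congestion_estimate(temp_c, humidity_pct):
    """Hypothetical compacted 'theory': a hand-off formula anyone can
    evaluate instantly, without the slow model that discovered it.
    The weights and threshold below are illustrative, not real."""
    score = 0.04 * temp_c + 0.02 * humidity_pct
    return "heavy" if score > 2.5 else "light"

# Yesterday: 35 °C at 80 % humidity -> the rule predicts heavy congestion.
prediction = congestion_estimate(35, 80)
```

The point is the shape of the artifact: hours of inference are compacted into a two-line rule that runs in microseconds.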

Utils

The ipr module system, developed by Inference and Prediction Research (ipr), provides a dynamic framework for integrating AI-powered functionality into applications. Each module follows a consistent interface pattern, with input/output handling in io.py and the main functionality in script.py.

AI Agent

The restaurant_reservation module encapsulates a restaurant reservation API, making it accessible through an LLM-powered translation layer. This approach enables seamless communication between scripts (agents) and the reservation API, simplifying integration and usage.

Install the module with:

ipr install restaurant_reservation

Usage example:

restaurant_reservation("book a table for 4 at 7 PM tomorrow at a sushi restaurant in downtown")

Output:

Reservation Details:
- Restaurant: Sushi Den
- Time: 7:00 PM
- Date: Tomorrow (2024-12-07)
- Guests: 4
- Status: Confirmed

Every ipr module follows a consistent structure:

module_name/
├── script.py   # Main functionality
└── io.py       # LLM layer that absorbs interface differences

The io.py file handles standardized text input/output, often integrating with LLM APIs like OpenAI. The script.py contains the core functionality, such as web automation with Selenium in the google_search example.
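A stripped-down sketch of this two-file pattern is shown below. The function names and the trivial parser are illustrative assumptions, not ipr's actual API; in a real module, io.py would call an LLM to translate free-form text into structured arguments:

```python
# io.py -- hypothetical sketch: turn free-form text into structured arguments.
def parse_request(text):
    """A real ipr io.py would call an LLM here; this trivial rule
    stands in for it so the example is self-contained."""
    return {"query": text.strip()}

# script.py -- hypothetical sketch: the module's core functionality.
def run(params):
    """Stand-in for the real work (e.g. Selenium automation in google_search)."""
    return f"searching for: {params['query']}"

# A caller only ever sees plain text in and plain text out.
result = run(parse_request("  best ramen in Tokyo "))
```

Keeping the LLM translation in io.py means script.py stays a plain, testable function, and the same agent-facing text interface works for every module.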

ipr can generate modules on-the-fly when they don't exist. For example:

ipr install pdf_chunk

Even though pdf_chunk doesn't exist in the module repository, ipr will generate a suitable implementation based on the module name and common patterns.

The meta_ipr module showcases ipr's ability to manage other modules:

ipr install meta_ipr

This meta-module can analyze tasks and automatically select and install appropriate ipr modules. For instance, if your task involves PDF processing and web scraping, meta_ipr might automatically install pdf_chunk and web_scraper modules.
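The selection step can be pictured with a toy sketch like the one below. The keyword table and function are hypothetical: the real meta_ipr would use an LLM to analyze the task, not substring matching:

```python
# Hypothetical sketch of meta_ipr's module selection. A keyword table
# stands in for the LLM-based task analysis the real module would do.
MODULE_HINTS = {
    "pdf": "pdf_chunk",
    "scrap": "web_scraper",
    "reserv": "restaurant_reservation",
}

def select_modules(task_description):
    """Return the ipr modules whose hint keywords appear in the task."""
    text = task_description.lower()
    return sorted({mod for key, mod in MODULE_HINTS.items() if key in text})

mods = select_modules("Extract tables from a PDF report and scrape prices")
```

Each selected name would then be passed to `ipr install`, which either fetches the module or, as described above, generates it on the fly.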