What are some other AI models (post-LLM, not using LLM methods) that go beyond the current high-end LLMs and near-horizon LLMs but are not yet proven to work?

Several experimental AI architectures and paradigms push beyond today's high-end LLMs but remain unproven at scale. These speculative approaches aim to address transformer limitations through radical structural innovations:

  1. Quantum-Integrated Architectures

Early prototypes combine quantum computing principles with neural networks for optimization tasks[1][2].

Challenges include qubit instability and error correction; co-locating quantum hardware with classical supercomputers is one approach being explored[2].
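To make the hybrid quantum-classical optimization loop concrete, here is a minimal sketch that classically simulates a one-qubit variational circuit (RY rotation) and minimizes the expectation of Z using the parameter-shift gradient. The circuit, cost function, and hyperparameters are illustrative assumptions, not any specific prototype's design:

```python
import math

# Toy quantum-integrated optimization: a classical optimizer tunes the
# parameter of a simulated one-qubit variational circuit.

def expectation_z(theta):
    """State RY(theta)|0> = cos(t/2)|0> + sin(t/2)|1>, so <Z> = cos(theta)."""
    return math.cos(theta)

def optimize(steps=200, lr=0.1):
    theta = 0.5  # arbitrary starting angle
    for _ in range(steps):
        # Parameter-shift rule: an exact gradient for this circuit family.
        grad = 0.5 * (expectation_z(theta + math.pi / 2)
                      - expectation_z(theta - math.pi / 2))
        theta -= lr * grad
    return theta, expectation_z(theta)

theta, energy = optimize()
# <Z> is minimized (value -1) as theta approaches pi.
```

In a real quantum-integrated system, `expectation_z` would be estimated by repeated measurements on hardware, which is where qubit noise and error correction enter.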

  2. Liquid Neural Networks

Dynamic networks that continuously adapt their weights during deployment, addressing the static-parameter limitation of transformers[3].

Potential applications: Real-time control systems and robotics requiring continuous learning[4].
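The core idea can be sketched with a single liquid time-constant (LTC) neuron: its effective time constant depends on the input, so the dynamics keep adapting at inference time. The weights, step size, and saturation level below are illustrative assumptions:

```python
import math

# One liquid time-constant (LTC) neuron, Euler-integrated.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ltc_step(h, x, dt=0.05, tau=1.0, w_in=2.0, w_rec=0.5, bias=0.0, A=1.0):
    f = sigmoid(w_in * x + w_rec * h + bias)   # input-dependent gate
    # The leak term (1/tau + f) varies with the input: a "liquid" time constant.
    dh = -(1.0 / tau + f) * h + f * A
    return h + dt * dh

h = 0.0
for t in range(200):
    x = math.sin(0.1 * t)        # a slowly varying input signal
    h = ltc_step(h, x)
```

Because the leak rate is state- and input-dependent rather than fixed, the neuron responds with different effective speeds to different inputs, which is what makes these networks attractive for real-time control.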

  3. Self-Improving AGI Prototypes

Systems using automated synthetic data generation loops to refine capabilities without human intervention[1][5].

Early research shows promise in mathematical theorem proving but lacks real-world validation[5][6].
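The generate-verify-retrain loop behind such systems can be sketched as a toy: a weak "generator" proposes candidate solutions, an automatic verifier filters them, and only verified examples are kept as synthetic training data. The task (integer factoring) and the tabular "model" are illustrative assumptions:

```python
import random

# Toy self-improvement loop: propose, verify, keep only what checks out.

random.seed(0)

def propose(n, knowledge):
    """Return a candidate divisor, preferring previously verified facts."""
    if n in knowledge:
        return knowledge[n]
    return random.randint(2, n - 1)

def verify(n, d):
    """The automatic checker: no human labels needed."""
    return n % d == 0

knowledge = {}                      # the "model": verified (n -> divisor) facts
targets = [15, 21, 35, 77]
for _ in range(2000):               # self-play rounds
    n = random.choice(targets)
    d = propose(n, knowledge)
    if verify(n, d):
        knowledge[n] = d            # "retrain" on verified synthetic data
```

Theorem proving is attractive for this paradigm for the same reason the toy works: a proof checker plays the role of `verify`, giving a trustworthy signal without human intervention.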

  4. Neuro-Symbolic Integration

Hybrid systems combining neural networks with symbolic reasoning engines[7][4].

Theoretical benefits: Enhanced interpretability and causal reasoning for scientific discovery[7].
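A minimal sketch of the hybrid pattern: a (stand-in) neural module emits soft perceptual scores, and a symbolic rule engine reasons over them to produce interpretable conclusions. The rules, threshold, and scores are illustrative assumptions:

```python
# Neuro-symbolic pipeline: neural scores -> thresholded facts -> rule inference.

def neural_perception(image_id):
    """Stand-in for a neural net: returns confidence per predicate."""
    fake_outputs = {
        "img1": {"has_wings": 0.92, "lays_eggs": 0.88, "has_fur": 0.05},
        "img2": {"has_wings": 0.10, "lays_eggs": 0.15, "has_fur": 0.97},
    }
    return fake_outputs[image_id]

RULES = [
    # (conclusion, required predicates)
    ("bird",   ["has_wings", "lays_eggs"]),
    ("mammal", ["has_fur"]),
]

def symbolic_reason(scores, threshold=0.5):
    facts = {p for p, s in scores.items() if s >= threshold}
    return [head for head, body in RULES if all(p in facts for p in body)]

conclusions = symbolic_reason(neural_perception("img1"))  # -> ["bird"]
```

The interpretability benefit is visible even in the toy: the conclusion can be traced back to explicit rules and thresholded facts, not just a score.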

  5. Brain-Inspired Computing Models

Spiking neural networks mimicking biological neuron communication[1][7].

Energy-efficient designs paired with neuromorphic chips (e.g., Intel Loihi) remain experimental[7].
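The basic unit of a spiking network is easy to sketch: a leaky integrate-and-fire (LIF) neuron whose membrane potential leaks toward rest, integrates input current, and emits a discrete spike at threshold. The constants below are illustrative, not tuned to any neuromorphic chip:

```python
# Leaky integrate-and-fire (LIF) neuron: communication happens only
# via sparse spike events, which is the source of the energy savings.

def simulate_lif(currents, dt=1.0, tau=10.0, threshold=1.0, v_reset=0.0):
    v, spikes = 0.0, []
    for t, i_in in enumerate(currents):
        v += dt * (-v / tau + i_in)     # leaky integration (Euler step)
        if v >= threshold:
            spikes.append(t)            # fire a spike, then reset
            v = v_reset
    return spikes

spikes = simulate_lif([0.15] * 50)      # constant input current
# Regular spiking: the neuron fires periodically, roughly every 11 steps.
```

A neuromorphic chip like Loihi implements this event-driven update in silicon, so no work is done between spikes.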

  6. Photonic Computing Designs

Optical neural networks using light instead of electrons for low-power processing[2][5].

Early prototypes achieve roughly 10x energy-efficiency gains but lack mature software ecosystems[2].
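The computation photonic hardware accelerates is, at its core, a matrix-vector multiply. A toy model of an incoherent optical version: inputs are encoded as light intensities, weights as fixed attenuations, and each detector sums the light it receives. Values are illustrative; real photonic hardware also contends with noise and limited precision:

```python
# Incoherent photonic matrix-vector multiply, modeled classically.

def photonic_matvec(attenuations, intensities):
    # Each output detector integrates attenuated light from every input channel.
    return [sum(a * x for a, x in zip(row, intensities))
            for row in attenuations]

W = [[0.5, 0.25],      # attenuation (transmittance) matrix, values in [0, 1]
     [0.1, 0.9]]
x = [1.0, 2.0]         # input light intensities
y = photonic_matvec(W, x)
```

Because the multiply-accumulate happens passively as light propagates, the energy cost is largely in encoding inputs and reading detectors, which is where the claimed efficiency gains come from.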

  7. Model Collaboration Ecosystems

Federated systems where specialized models co-evolve through shared parameter spaces[7].

Example: Meteorological models combining weather prediction with energy grid optimization[7].
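The coordination core of such an ecosystem can be sketched with federated averaging: specialist models train locally on different data, then periodically merge into a shared parameter space. Models are plain weight lists here; the tasks and update rule are illustrative assumptions:

```python
# Federated-averaging sketch: local updates, then a parameter-space merge.

def local_update(weights, gradient, lr=0.1):
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(models):
    return [sum(ws) / len(models) for ws in zip(*models)]

# Two specialists start from a shared model but see different data.
shared = [0.0, 0.0]
weather_model = local_update(shared, gradient=[-1.0, 0.5])
grid_model    = local_update(shared, gradient=[0.2, -0.8])

# Periodic sync: each specialist's knowledge flows into the shared space.
shared = federated_average([weather_model, grid_model])
```

In the meteorological example from the text, the weather and energy-grid specialists would play the roles of the two local models, co-evolving through the shared parameters.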

These approaches face significant hurdles in scalability, reproducibility, and integration with existing infrastructure. While promising in theory, none have yet demonstrated transformer-level performance across diverse tasks. Research priorities include developing standardized benchmarks for cross-architecture comparisons[3][7].

| Model | Estimated Reasoning Capability | Likelihood of Success | Predicted Time to Market |
|---|---|---|---|
| Model Collaboration Ecosystems | High (cross-domain synthesis) | Medium-High | 2025-2026 |
| Neuro-Symbolic Integration | High (causal reasoning) | Medium | 2026-2028 |
| Brain-Inspired Computing Models | Medium (specialized tasks) | Medium | 2025-2027 |
| Liquid Neural Networks | Medium (adaptive control) | Medium | 2027-2028 |
| Photonic Computing Designs | Medium (efficiency-focused) | Low | 2028+ |
| Quantum-Integrated Architectures | High (theoretical potential) | Low | 2028+ |
| Self-Improving AGI Prototypes | High (autonomous refinement) | Low | 2028+ |

 
Key Insights:

  1. Near-Term Focus: Model collaboration systems lead due to incremental improvements over existing federated learning frameworks, with early prototypes already deployed in weather prediction and supply chain optimization.
  2. Reasoning vs Practicality: Neuro-symbolic approaches show strong reasoning potential but face integration challenges between neural and symbolic components.
  3. Hardware Dependency: Photonic and quantum models remain constrained by immature supporting infrastructure despite their theoretical advantages.

  1. https://www.linkedin.com/pulse/future-large-language-models-llms-2025-beyond-rahul-chaube-cfkac
  2. https://blogs.nvidia.com/blog/generative-ai-predictions-2025-humanoids-agents/
  3. https://www.forbes.com/sites/robtoews/2023/09/03/transformers-revolutionized-ai-what-will-replace-them/
  4. https://www.linkedin.com/pulse/future-ai-beyond-transformers-robyn-le-sueur-swtqf
  5. https://www.gov.uk/government/publications/international-ai-safety-report-2025/international-ai-safety-report-2025
  6. https://www.chartis-research.com/points-of-view/7947299/no-surprises-2025-will-be-another-big-year-for-ai

  7. https://www.eurekalert.org/news-releases/1076166

Watch the animated musical story / prediction of where things could be going and how we get there. Trigger warning: It starts out dark.

A music video of the future by Scott Howard Swain