How far can you THINK?
The UNIVERSE was created for you to EXPLORE.
Why would you allow your thoughts to be constrained by what you can see from your vehicle?
Or … for that matter … why would you allow your thoughts, particularly your conceptual thinking, to be constrained by human-scale sensory experiences?
It’s not just the far reaches of outer space; there’s also plenty of room at the bottom … perhaps you are interested in exploring nanoscale biological processes by advancing the limits of optical microscopy? We might ask, for example, “Are there little AFFORDABLE-FOR-THE-MASSES microscopy projects, for cellular and molecular-level digital optical microscopy, in case families want to share videos of their gut bacteria or new virus friends before/after the holidays on Instagram?” … maybe that would lead to setting up a mini PC for a bit of an industrial machine vision application … in which a global shutter camera sensor (say, an Arducam 2.3MP AR0234 Color Global Shutter) is mounted in a modified structured-light epifluorescent microscope … the objective would be achieving something like super-resolution computational saturated absorption microscopy, or maybe GPU-assisted total internal reflection fluorescence (TIRF) microscopy.
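As a hedged, purely illustrative sketch of the machine-vision end of that daydream: the following assumes the sensor enumerates as an ordinary UVC camera at index 0 (an Arducam AR0234 on a mini PC may well enumerate differently) and uses OpenCV for a crude background-subtraction pass of the kind real fluorescence pipelines do far more carefully:

```cpp
#include <opencv2/opencv.hpp>

// Hedged sketch: grab frames from a camera exposed as a UVC device and
// apply a simple background-subtraction step of the sort used in
// machine-vision and fluorescence-microscopy pipelines. Camera index 0
// is an assumption, not a given, for any particular sensor/board combo.
int main() {
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray, background, diff;
    cap.read(background);                               // first frame = "background"
    cv::cvtColor(background, background, cv::COLOR_BGR2GRAY);

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        cv::absdiff(gray, background, diff);            // what changed vs. background
        cv::threshold(diff, diff, 25, 255, cv::THRESH_BINARY);
        cv::imshow("motion", diff);
        if (cv::waitKey(1) == 27) break;                // Esc to quit
    }
    return 0;
}
```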
***How FAR down, up, sideways, backward, forward can you THINK?***
You just can’t think that far in a vehicle … or a plane … or even riding around the Universe or going to Mars in a spaceship. If you really want to think very far, you are going to need to begin to master computational technology … PART of that is about eGPUs and rentable compute … PART of it is about the open source software and communities that make that compute worth using.
Using eGPUs in order to exploit the realm of rentable compute is about getting around better in the information realm … relying on lame-thinking mindsets still stuck in automotive culture … moving things around in vehicles or even airplanes … is just too damned slow, inefficient and geographically constrained … it’s not just that information travels at the speed of light; it’s also that thought and human consciousness is increasingly global, ie somebody else in the world already has far better ideas than you can hope to have. In other words, the UNIVERSE of information is about HUMILITY and ditching the ignorance of thinking that everything has to be invented here.
eGPUs For Exploration Of The Limits Of Your Noosphere
The whole point of using desktop GPU technology as a machine learning or personal AI sandbox is primarily the extensibility of open source software, which allows people to THINK more collaboratively … to learn, try things and develop the code that affords them the freedom to customize and tailor their technology for their own needs … while thinking more creatively, collaborating with a very large community of other developers, and helping each other find solutions in an agile, flexible, rapid and secure manner … and you can bet your ass it’s a helluva lot more competitive than just thinking about your own stuff. The point of rentable compute and the realm of truly open AI is to LEARN in a minimal sandbox environment … to understand the fundamentals well enough to exploit the different options available in the realm of rentable compute.
Obviously, the multi-bajillion pound gorilla in this space is the Nvidia developer network, and although it’s a good idea to virtually attend or at least be aware of things like Nvidia’s GTC … the amount of wealth at stake means there is a scramble to develop solid competing alternatives … and more competition is way, way, way better than less competition, so it pays to understand the competitive landscape, ie to understand more than just the realm of Nvidia. Plenty of people [in addition to the trillion-dollar monster Nvidia itself] will make an excellent case for being proficient in Nvidia’s offerings … so we’ll let them do that … we have no argument against it … but IN THIS POST we’re going to focus on the other players in this space, ie AMD, Intel, and others, because … competition is good.
AMD’s Radeon Open Compute (ROCm) collection includes the drivers, development tools, and APIs that enable GPU programming from the low-level kernel up to end-user applications, powered by AMD’s Heterogeneous-computing Interface for Portability (HIP), an open-source C++ GPU programming environment and its corresponding runtime. ROCm began as AMD’s Boltzmann Initiative and was finally productized in 2016 to provide an alternative to Nvidia’s CUDA; it includes a tool (HIPIFY) to port CUDA source code to portable HIP source code, which can then be compiled with either AMD’s Heterogeneous Compute Compiler (HCC) or the Nvidia CUDA Compiler (NVCC).
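To make that concrete, here is a minimal, hedged sketch of what HIP code looks like … it assumes a working ROCm (or CUDA-backed HIP) install and should compile with `hipcc vecadd.cpp -o vecadd`; it is a toy vector add, not anybody’s production kernel:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Minimal HIP kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Managed memory gives pointers usable from both host and device.
    hipMallocManaged((void**)&a, bytes);
    hipMallocManaged((void**)&b, bytes);
    hipMallocManaged((void**)&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    hipLaunchKernelGGL(vecAdd, dim3(blocks), dim3(threads), 0, 0, a, b, c, n);
    hipDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.0
    hipFree(a); hipFree(b); hipFree(c);
    return 0;
}
```

The same source, run through HIPIFY in reverse or compiled against the CUDA backend, targets Nvidia hardware … that portability is the whole pitch.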
The whole point of even attempting to develop, test and deploy GPU-accelerated HPC, AI, scientific computing, CAD, and other applications in a free, open-source, integrated and secure software ecosystem is to LEARN the hard way … to gain a hands-on understanding of how AI is actually being done.
You won’t become the world’s top expert in eGPU technology … the fact of the matter is, almost nobody can really stay ahead of this rapidly expanding curve of knowledge development … but this is about knowing just enough to make semi-informed decisions about the CULTURE being made possible by AI, and about how shrewd, hardcore businesses are building out their AI strategies and what that will mean for that culture.
In other words, instead of SQUANDERING money on vehicles and just mindlessly driving around, you really need to start understanding how to get yourself around computationally in the accelerated, AI-ified information realm … because every $1,000 worth of investment in rentable compute in 2025, informed by your own eGPU technology, will take you much, much, much further than $1,000,000 worth of investment in automotive technology … and that includes engine-powered technology like farm equipment, earthmoving equipment or big trucks, or anything that still depends on an internal combustion engine or the electric motor that replaces that engine.
Your hardware strategy for your little eGPU learning sandbox
Oculink 2, PCIe 6.0, NVLink, CXL, Terahertz, GPU/TPU Virtualization and MUCH, MUCH, MUCH More!
While Oculink 2 is well known to followers of things like the eGPU forum and has established itself as the current champion for connecting external GPUs (eGPUs) to rackmount servers, the future holds some exciting possibilities for even faster and more efficient connections.
Here are a few potential contenders for the “next big thing” in eGPU technology:
- PCI Express (PCIe) 6.0 and 7.0: The PCI-SIG is responsible for developing and enhancing the PCIe standard, crucial for ensuring compatibility and interoperability of eGPU/eTPU solutions. The next major revision, PCIe 7.0, is already on the horizon, and the not-quite-available BUT close-to-being-delivered PCIe 6.0 standard will deliver significant bandwidth improvements over PCIe 5.0, currently employed by Oculink 2. PCIe 6.0 doubles the theoretical bandwidth yet again, to roughly 8 GB/s per lane, or about 128 GB/s in each direction on an x16 link (see the bandwidth sketch after this list). While motherboards and eGPUs supporting PCIe 6.0 will take some time to become fully mainstream, it represents a clear evolutionary step for high-performance computing requiring massive GPU acceleration.
- NVLink and other proprietary high-speed interconnects (eg from AMD, Intel, Broadcom, et al.): Manufacturers like NVIDIA [and tons of would-be NVIDIA competitors and startups hoping to be acquired] are hyper-motivated by the massive investor interest to develop their own interconnects like NVLink, offering even higher bandwidth and lower latency compared to PCIe. While currently limited to specific systems and GPUs, advancements in these proprietary technologies could challenge the dominance of PCIe in the future. The potential downside of proprietary solutions is their limited compatibility and potential vendor lock-in, requiring careful consideration by users.
- CXL (Compute Express Link): Compute Express Link™ (CXL™) is an industry-supported, cache-coherent interconnect standard (currently at revision 3.1) for processors, memory expansion and accelerators, focused on providing a high-bandwidth, low-latency link between CPUs and accelerators like GPUs. While not directly replacing eGPU connections, CXL could enable tighter integration and efficient data sharing between the CPU and GPU, unlocking additional performance gains. CXL is still in its early stages of adoption, but its potential for optimizing data movement and communication within the server could significantly impact future eGPU implementations.
- Wireless 6G technologies such as Terahertz (THz): Of course, this might SEEM unduly futuristic, but advancements in wireless communication technologies like Terahertz (THz) transmission, along with advances in multiplexing and other applications of information theory, could eventually pave the way for truly cable-free eGPU connections. JUST IMAGINE the flexibility and efficiency of eliminating bulky cables without compromising performance. This option remains far from practical implementation, but the potential long-term benefits, not simply for data center design and maintenance on Earth, but for Universal computing by satellites and exploratory vehicles in deep space, are undeniable.
- Hardware architecture in general: It’s not just about GPUs, TPUs, accelerators, and other hardware components … it’s tough to predict the economic importance of any specific development in hardware architecture, or of the manufacturing technologies necessary to realize those architectures … but there are plenty of research institutions that not only do theoretical research in hardware but are also particularly motivated by practical use cases for GPU/TPU clusters in their other fields of research … of course, this includes the likes of Lawrence Livermore National Laboratory and the Max Planck Institute for Computer Science, but practically it’s every research institution in the world with something like a computer engineering department on campus … moreover, there are plenty of ongoing academic research projects from a variety of quarters publishing pre-print papers in this space … so hardware architecture isn’t just about things like quantum computing or the RISC-V open source processor architecture, but also about plenty of other theoretical-yet-close-to-practical developments in parallel computing.
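To keep the per-generation PCIe numbers straight, here is a hedged back-of-the-envelope sketch … the raw per-lane rates match the published PCI-SIG generations, but real-world throughput is always lower once protocol overhead is accounted for:

```cpp
#include <cstdio>

// Back-of-the-envelope PCIe bandwidth per generation.
// "gts" is the raw per-lane transfer rate in GT/s; usable bytes depend on
// encoding: Gen 3-5 use 128b/130b, while Gen 6-7 move to PAM4 signaling
// with FLIT mode (whose overhead this sketch deliberately ignores).
struct PcieGen { const char* name; double gts; double encodingEff; };

int main() {
    const PcieGen gens[] = {
        {"PCIe 4.0",  16.0, 128.0 / 130.0},
        {"PCIe 5.0",  32.0, 128.0 / 130.0},
        {"PCIe 6.0",  64.0, 1.0},
        {"PCIe 7.0", 128.0, 1.0},
    };
    const int lanes = 16;  // a typical GPU slot
    for (const PcieGen& g : gens) {
        // GT/s x encoding efficiency / 8 bits = GB/s per lane, per direction.
        double perLane = g.gts * g.encodingEff / 8.0;
        printf("%-9s ~%4.1f GB/s per lane, ~%4.0f GB/s per x%d link (each direction)\n",
               g.name, perLane, perLane * lanes, lanes);
    }
    return 0;
}
```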
Open Source Communities
There are ALSO several relevant Open Hardware/Open Software Communities such as:
- Open Compute Project: Open-source community focused on developing efficient server and data center designs, with potential applications for eGPU/eTPU connectivity.
- OpenPOWER Foundation: Open ecosystem built around IBM’s POWER processor architecture, focused on high-performance computing, with potential applications for eGPU/eTPU connectivity.
- OpenACC: Directive-based open standard for parallel programming of heterogeneous CPU/GPU systems, ie another route to portable accelerator code.
- OpenMP: The community developing the OpenMP API specification for shared-memory parallel programming, which since version 4.0 also covers offloading work to accelerators like GPUs (see the offload sketch after this list).
- OpenCL: The Khronos open standard for writing portable parallel code across CPUs, GPUs, FPGAs and other accelerators.
- OVS (Open vSwitch): Open-source project for a production-quality, multilayer virtual switch, providing high-performance virtual networking and potentially enabling efficient data transfer for virtualized GPUs/TPUs.
- DPDK (Data Plane Development Kit): Open-source framework for high-performance userspace packet processing, with potential applications in optimizing network traffic for eGPUs/eTPUs.
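Since OpenMP is the lowest-friction of these to actually try, here is a minimal, hedged offload sketch … the same loop runs on the host CPU or gets offloaded to a GPU depending entirely on compiler flags (eg `clang++ -fopenmp --offload-arch=gfx90a`; exact flags vary by vendor and toolchain, so treat them as an assumption):

```cpp
#include <cstdio>
#include <vector>

// Minimal OpenMP target-offload sketch: a plain C++ loop that a capable
// compiler can run on the host or offload to an accelerator.
int main() {
    const int n = 1 << 20;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);
    float* pa = a.data(); float* pb = b.data(); float* pc = c.data();

    // map() clauses copy the inputs to the device and the result back.
    #pragma omp target teams distribute parallel for \
        map(to: pa[0:n], pb[0:n]) map(from: pc[0:n])
    for (int i = 0; i < n; ++i)
        pc[i] = pa[i] + pb[i];

    printf("c[0] = %f\n", pc[0]);  // expect 3.0
    return 0;
}
```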
Software and GPU/TPU Virtualization
The rise of GPU/TPU virtualization and cloud-based access to GPUs could transform the eGPU landscape. Instead of physical connections, users might access remote GPUs through virtualized pools, simplifying management and scaling based on demand. This shift would require robust virtualization technologies and efficient network infrastructures, but could dramatically change how businesses approach and utilize GPUs for various workloads.
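A first, hedged sanity check in any such environment … whether the GPU is physically cabled in, virtualized, or rented by the hour … is simply asking the runtime what it can see (HIP shown here; the CUDA equivalents are nearly identical):

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Hedged sketch: enumerate whatever GPUs the current environment exposes.
// On a rented cloud instance or inside a container with a virtualized GPU,
// the device you "see" is whatever the virtualization layer handed you.
int main() {
    int count = 0;
    if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
        printf("No HIP-visible GPUs in this environment.\n");
        return 0;
    }
    for (int i = 0; i < count; ++i) {
        hipDeviceProp_t prop;
        hipGetDeviceProperties(&prop, i);
        printf("GPU %d: %s, %.1f GiB VRAM, %d compute units\n",
               i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.multiProcessorCount);
    }
    return 0;
}
```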
Linux Kernel GPU Virtualization: The Linux kernel’s Direct Rendering Manager (DRM) layer contains code intended to support the needs of complex graphics devices, usually containing programmable pipelines well suited to 3D graphics acceleration. Graphics drivers in the kernel may use DRM functions to make tasks like memory management, interrupt handling and Direct Memory Access (DMA) easier, and to provide a uniform interface to applications.
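In practice, the DRM layer shows up in userspace as device nodes under /dev/dri … here is a hedged sketch of what a compute stack (or a container entitled to a virtualized GPU) actually sees:

```cpp
#include <filesystem>
#include <cstdio>

// Hedged sketch: list the DRM nodes the kernel exposes under /dev/dri.
// cardN is the full privileged interface (modesetting etc.); renderDN is
// the unprivileged render node that compute stacks such as ROCm or Mesa
// open, and the node typically passed through to a container or VM.
int main() {
    namespace fs = std::filesystem;
    const fs::path dri{"/dev/dri"};
    if (!fs::exists(dri)) {
        printf("No /dev/dri -- no DRM-managed GPU visible in this context.\n");
        return 0;
    }
    for (const auto& entry : fs::directory_iterator(dri))
        printf("%s\n", entry.path().c_str());
    return 0;
}
```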
Conclusion … why curated AWESOME lists and RSS feeds matter
A rapidly developing research and development space such as eGPU/eTPU connectivity illustrates why it is so important to curate repositories of AWESOME lists of blogs and online publications by industry experts and research institutions, to stay up-to-date on the latest trends and advancements … by following these publications, or the abstracts in different RSS feeds, one is better able to keep an eye on the key researchers and labs driving development, as well as on the conferences and workshops focused on HPC, cloud computing, and server technologies, since these often serve as the venues where cutting-edge research and development in eGPU/eTPU connectivity first emerges.