Virtual Reality & Machine Learning / AI

Eanix hardware moves beyond traditional home server setups, which typically use small computers such as the Raspberry Pi 4. These devices are great, but their limited performance restricts what they can do. Eanix instead uses consumer gaming/workstation hardware, which has matured over the years into a capable server alternative for the home.

The purpose of the home server, along with some of its more powerful features, is to let you take advantage of Artificial Intelligence / Machine Learning and play virtual reality games wirelessly. The idea is that if you want to set up facial recognition for a camera outside your front door, you can. If you want to add a small screen with a camera in the kitchen to recognize food products and assist with recipes and ordering, the kind of feature built into some refrigerators, you can.

Of course, you will probably want any cameras you connect to be kept off the internet, and that is entirely doable: an external (or internal) network card on the server can connect to a private PoE (Power over Ethernet) switch that supplies both connectivity and power to the cameras. You would then access the feeds through a web interface on the server, while the cameras themselves remain unreachable from outside that private network, as in the sketch below.
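
As a rough illustration of that layout, here is a minimal sketch that pulls a snapshot from a camera on the private PoE subnet and serves it over a small web endpoint on the server. The RTSP URL, subnet address, and route name are placeholders for this example, not part of any Eanix software.

```python
# Minimal sketch: expose a snapshot from a camera on an isolated PoE subnet
# through a web endpoint on the server. The RTSP URL and subnet address are
# placeholders -- substitute your camera's actual stream URL and credentials.
import cv2
from flask import Flask, Response, abort

app = Flask(__name__)

# Hypothetical camera on the private PoE switch (not routed to the internet).
CAMERA_URL = "rtsp://192.168.50.10:554/stream1"

@app.route("/front-door/snapshot")
def snapshot():
    cap = cv2.VideoCapture(CAMERA_URL)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        abort(503, "Camera stream unavailable")
    ok, jpg = cv2.imencode(".jpg", frame)
    return Response(jpg.tobytes(), mimetype="image/jpeg")

if __name__ == "__main__":
    # Bind to the LAN interface only; the cameras themselves stay unreachable.
    app.run(host="0.0.0.0", port=8080)
```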

In the virtual reality space, the metaverse seems to be the “next big thing”, as stores and businesses rush to get set up in it. The Eanix home server would be technically capable of supplying graphics processing power to one or more virtual reality headsets. The headsets would transmit over 60 GHz or a similar frequency using technology from DisplayLink or similar vendors, with the goal of supporting 4K VR @ 60 fps or 2K VR @ 90 fps.
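
To give a sense of why a 60 GHz class link (plus compression) is needed for those targets, here is a back-of-the-envelope bandwidth check. The resolutions and 24-bit color depth are assumptions for illustration; real headsets stream compressed, per-eye video, so treat these numbers as rough uncompressed upper bounds.

```python
# Rough uncompressed bitrate for the two wireless VR targets.
# Assumptions: 4K = 3840x2160, "2K" = 2560x1440, 24 bits per pixel.

def raw_bitrate_gbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e9

targets = {
    "4K @ 60 fps": (3840, 2160, 60),
    "2K @ 90 fps": (2560, 1440, 90),
}

for label, (w, h, fps) in targets.items():
    print(f"{label}: ~{raw_bitrate_gbps(w, h, fps):.1f} Gbps uncompressed")
# 4K @ 60 fps: ~11.9 Gbps, 2K @ 90 fps: ~8.0 Gbps -- well above typical Wi-Fi,
# which is why multi-gigabit 60 GHz links and video compression come into play.
```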

With Nvidia graphics cards, you would have technology such as ray tracing, which provides much better lighting in games. For a demo, check out the Minecraft Raytraced video on YouTube below. In addition, Nvidia offers pre-trained models for recognizing content within a video stream on its cards, such as Nvidia’s FaceDetect. While not required, these would likely provide a better experience than some off-the-shelf alternatives. It would ultimately be up to the app you run, but the idea is to also have a drag-and-drop editor in the style of IFTTT, with machine learning models as inputs; see the sketch below.
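
To illustrate the “machine learning as an automation input” idea, the sketch below uses OpenCV’s bundled Haar cascade face detector as a stand-in for a GPU model like FaceDetect, and fires an IFTTT-style webhook whenever a face appears in the camera stream. The stream URL and webhook address are hypothetical.

```python
# Minimal sketch: a generic face detector (OpenCV's bundled Haar cascade,
# standing in for a GPU model such as NVIDIA's FaceDetect) watches the
# front-door stream and posts to an IFTTT-style webhook when a face appears.
import cv2
import requests

CAMERA_URL = "rtsp://192.168.50.10:554/stream1"            # private PoE camera
WEBHOOK_URL = "http://localhost:8080/hooks/face-at-door"   # hypothetical trigger

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(CAMERA_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        # Hand off to the automation layer; a real setup would debounce this.
        requests.post(WEBHOOK_URL, json={"faces": len(faces)})
cap.release()
```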
