ML-Unifying Companies


Quansight was founded by Travis Oliphant, a leader in the Python data community who has authored or led the creation of industry cornerstones such as NumPy, SciPy, Numba, and Conda, and who helped establish NumFOCUS and the PyData conference series. Through consulting services, Quansight provides the additional people and expertise needed to deploy new technology, solve complex problems, or optimize what is already in place so that it runs faster and uses less memory. They work with data engineering, DevOps, data science, MLOps, and analytics teams to improve the performance of those teams' systems and workflows. Their services span Data Engineering & MLOps, Infrastructure, Scaling & Acceleration, Visualization & Dashboards, Open Source Integration, Algorithms, AI & Machine Learning, Packaging & Environment Management, and Jupyter Technologies. They are the creators of the Array API Standard.
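The point of the Array API Standard is that numerical code can ask an array for its own namespace instead of hard-coding a single library, so one function runs unchanged across NumPy, CuPy, PyTorch, and other compliant libraries. A minimal sketch of the idiom, assuming NumPy 2.x (which implements the standard on `ndarray`; the `hasattr` fallback covers older versions):

```python
import numpy as np

def standardize(x):
    """Scale an array to zero mean and unit variance, library-agnostically."""
    # Standard-compliant arrays expose __array_namespace__(), which
    # returns the module implementing the Array API for that array.
    # Fall back to NumPy for arrays predating the standard.
    xp = x.__array_namespace__() if hasattr(x, "__array_namespace__") else np
    return (x - xp.mean(x)) / xp.std(x)

y = standardize(np.array([1.0, 2.0, 3.0]))
```

With CuPy or another standard-compliant library installed, the same function would run unchanged on that library's arrays, which is the portability the standard is designed to provide.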


Modular is a startup founded by the creators of MLIR. Their observation is that fragmentation and technical complexity have confined the benefits of AI to a privileged few; the rest of the world isn't benefiting as it should from this transformational technology. Their mission is to have a real, positive impact on the world by reinventing the way AI technology is developed and deployed into production with a next-generation developer platform. Few details about that platform are public, but given the founders' LLVM and MLIR background, it will presumably provide a modular solution at a relatively low level of abstraction.


OctoML is a startup founded by the creators of Apache TVM. Their mission is to make AI more sustainable and accessible, empowering more creators to harness the transformative power of ML to build intelligent applications. They focus on efficient model execution and automation to scale services and reduce engineering burden. Specifically, they enable models to run on a broad set of devices, making them easier to deploy without specialized skills. Their services cover inference in the cloud, at the edge, and across a variety of platforms and hardware vendors. They strive to maximize performance, with simple deployment and benchmarking features included.