Tech giants like to boast about trillion-parameter AI models that require massive, expensive GPU clusters. But Fastino is taking a different approach.
The Palo Alto-based startup says it has discovered a new kind of AI model architecture that is intentionally small and task-specific. The models are so small that they were trained on low-end gaming GPUs for a total of under $100,000, Fastino says.
The approach is drawing attention. Fastino has raised $17.5 million in seed funding led by Khosla Ventures, famously OpenAI's first venture investor, the startup exclusively told TechCrunch.
That brings the startup's total funding to about $25 million. It raised $7 million last November in a pre-seed round led by Microsoft's VC arm M12 and Insight Partners.
"Our models cost a fraction to train while being faster, more accurate, and outperforming flagship models on specific tasks," said Fastino CEO and co-founder Ash Lewis.
Fastino has created a suite of small models that it sells to enterprise customers. Each model focuses on a specific task an enterprise might need, such as redacting or summarizing corporate documents.
Fastino isn't sharing early metrics or users yet, but says its performance is wowing early users. For example, because they are so small, its models can deliver an entire response in a single token, Lewis told TechCrunch, with the technology producing a detailed answer in milliseconds.
Whether Fastino's approach will catch on is still too early to say. The enterprise AI space is crowded, with companies like Cohere and Databricks also touting AI that excels at specific tasks. And enterprise-focused model makers, like Anthropic and Mistral, also offer small models. It's also no secret that the future of generative AI for the enterprise is likely to be smaller, more focused language models.
Time will tell, but an early vote of confidence from Khosla certainly doesn't hurt. For now, Fastino says it's focused on building a cutting-edge AI research team. It's targeting researchers at top AI labs who aren't interested in building the biggest models or beating benchmarks.
"Our hiring strategy has been very much focused on researchers who perhaps take a contrarian view of how language models are being built right now," Lewis said.

