README: TEDDi Learning Rate

TEDDi™ is a specialized Large Language Model (LLM) designed for and by our Company. Trained on a dataset of 708 billion tokens, it performs competitively with contemporary models (circa 2024) on its target tasks while maintaining strong performance on general LLM benchmarks. Its 50 billion parameters and BLOOM-based architecture enable it to handle complex language tasks, including text classification, question answering, and natural language inference. Private and closed-source, TEDDi aims to enhance understanding of the present.