AWS "Let's Build a Startup" live show
Twitch
• Video • September 9, 2025 • Online
Live streaming session on the AWS Twitch channel, discussing startup-building strategies and AWS services for entrepreneurs.
Live Stream
AWS
Startup
Twitch
Online
"Open, Private, Cost-Effective AI: Arcee.ai Foundation Models on Intel Xeon"
Intel AI Innovators Across Industries Webinar
• Webinar • August 12, 2025 • Online
Webinar presentation on Arcee Foundation Models (AFM), a family of small, efficient language models that deliver state-of-the-art AI quality on Intel Xeon CPUs. AFM offers performance comparable to that of much larger models while significantly lowering hosting costs and infrastructure complexity. Designed for enterprises that demand cost-efficiency, privacy, security, and regulatory compliance, AFM provides full transparency with open weights and architecture, eliminating vendor lock-in.
Webinar
Intel Xeon
Small Language Models
Enterprise AI
Cost-Effective AI
Online
Real-world Applications of Optimized Models on Arm with Meta, AWS, Arcee AI, AIZIP, and Stability
Arm Partner Summit
• Conference • August 6, 2025 • Cambridge, UK
Panel discussion on real-world applications of optimized models on Arm architecture, featuring collaboration with Meta, AWS, Arcee AI, AIZIP, and Stability. Exploring practical implementations and performance optimizations for AI models on Arm-based systems.
Conference
Panel
Arm
AI Optimization
Cambridge
Trying to figure out MCP by actually building an app from scratch with open source and SLMs
Europe's first developer conference dedicated to the Model Context Protocol (MCP). Presentation on building applications from scratch using open source tools and Small Language Models, exploring how MCP standardizes AI data integration and enables new possibilities for AI system development.
Conference
MCP
Small Language Models
Open Source
London
Building and working with Small Language Models
Paris AI, ML and Computer Vision Meetup
• Meetup • July 16, 2025 • Paris, France
Slides
Practical session on using small open-source language models (SLMs) in enterprise settings. Exploring modern workflows for adapting SLMs with domain-specific pre-training, instruction fine-tuning, and alignment. Introducing and demonstrating open-source tools such as DistillKit, Spectrum, and MergeKit, which implement advanced techniques crucial for achieving task-specific accuracy while optimizing computational costs. Also discussing models and solutions built by Arcee AI.
Meetup
Small Language Models
Enterprise AI
Paris
Implementing High-Quality and Cost-Efficient AI Applications with Small Language Models
Budapest ML 2025
• Conference • June 17, 2025 • Budapest, Hungary
This session focuses on practical techniques for using small open-source language models (SLMs) in enterprise settings. We first highlight the limitations of proprietary models in terms of privacy, compliance, and cost. Then, we explore modern workflows for adapting SLMs with domain-specific pre-training, instruction fine-tuning, and alignment. Along the way, we will introduce and demonstrate open-source tools like DistillKit, Spectrum, and MergeKit, which implement advanced techniques that are critical in achieving task-specific accuracy while optimizing computational costs. We'll also discuss some of the models and enterprise solutions built by Arcee AI.
Conference
Small Language Models
Enterprise AI
Cost-Efficient AI
Budapest
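Model merging, one of the techniques implemented by the MergeKit library mentioned above, can be illustrated with a minimal sketch. This is not MergeKit's actual API (MergeKit also offers more advanced methods such as SLERP and TIES); it is just the simplest case, a weighted average of matching parameter tensors:

```python
import numpy as np

def linear_merge(state_dicts, weights):
    # Weighted average of matching parameter tensors across models;
    # the simplest merge method, shown here for illustration only.
    assert abs(sum(weights) - 1.0) < 1e-9
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name] for w, sd in zip(weights, state_dicts))
    return merged

# Two toy "models" with a single shared parameter tensor each.
a = {"layer.weight": np.ones((2, 2))}
b = {"layer.weight": np.zeros((2, 2))}
m = linear_merge([a, b], [0.7, 0.3])
assert np.allclose(m["layer.weight"], 0.7)
```

In practice the tensors come from checkpoints of fine-tuned variants of the same base model, which is what makes element-wise averaging meaningful.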
Building an AI Retail Assistant at the Edge with SLMs on Intel CPUs
Cisco Live 2025 • Conference • San Diego, USA
Featured in the Intel Showcase (#3035) at Cisco Live 2025, the Edge IQ Retail Assistant demonstrates how AI can transform retail operations without relying on GPUs. Powered by Intel Xeon 6 CPUs running in a Cisco UCS server, this technical demonstrator showcases a chatbot interface powered by open-source small language models and real-time data analytics. Store associates can interact naturally through voice or text, receiving immediate information about product availability from Chooch's inventory system or crowd density from WaitTime's analytics platform. The solution runs three sophisticated small language models entirely on Intel Xeon processors using OpenVINO optimization, highlighting the capabilities of modern CPU-based AI inference for edge computing applications.
Conference
Edge Computing
Small Language Models
Intel Xeon
Retail AI
Cisco
San Diego
AI and Machine Learning for Public Investors
The World Bank, Washington DC • Conference • June 9–13, 2025
Presentation on AI and machine learning applications for public investment and development projects.
Conference
Public Investment
AI Applications
Implementing High-Quality and Cost-Efficient AI Applications with Small Language Models
ODSC • Conference • Boston, USA
Session focusing on practical techniques for using small open-source language models (SLMs) in real-life projects. Covers the limitations of proprietary models in terms of privacy, compliance, and cost, then explores modern workflows for adapting SLMs with domain-specific pre-training, instruction fine-tuning, and alignment. Introduces and demonstrates open-source Arcee AI SLMs, along with libraries such as DistillKit, Spectrum, and MergeKit, which are critical for achieving task-specific accuracy while optimizing computational costs. The discussion includes why SLMs are a great fit for advanced scenarios such as model routing and agentic workflows.
Conference
Small Language Models
Enterprise AI
Cost-Efficient AI
ODSC
Boston
Optimize Your AI/ML Workloads on Amazon EC2 and AWS Graviton
AWS Summit Paris
• Conference • April 9, 2025 • Paris, France
Session GAI307 focusing on optimizing AI/ML workloads on Amazon EC2 and AWS Graviton. Presentation covers Small Language Models (SLMs), AWS Graviton 4 performance benefits, cost optimization strategies, and practical demonstrations of Arcee AI models running on Graviton instances. Discussion includes quantization techniques, performance comparisons, and enterprise AI deployment recommendations.
Conference
AWS
Graviton
Small Language Models
AI Optimization
Paris
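The quantization techniques covered in the session above can be illustrated with a minimal sketch of symmetric int8 weight quantization. This is a didactic example only, not the scheme used by any particular runtime (real deployments typically rely on optimized inference engines with their own quantization formats):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor int8 quantization:
    # map [-max|w|, +max|w|] onto the integer range [-127, 127].
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the rounding error
# per weight is bounded by half the quantization step.
assert np.abs(w - w_hat).max() <= scale / 2 + 1e-6
```

The 4x memory reduction is what drives both cost savings and throughput gains on CPU instances, since inference there is usually memory-bandwidth bound.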
CloudFest
Europa-Park, Germany • Conference • March 19, 2025
Participation in the CloudFest conference, discussing cloud computing and AI technologies.
Conference
Cloud Computing
AI
Knowledge Distillation: Transferring Capabilities from Large to Small Language Models
AWS AI and Data Conference Ireland 2025
• Conference • March 13, 2025 • Lyrath Convention Centre, Kilkenny, Ireland
Knowledge distillation transfers capabilities from large language models to smaller, faster models while maintaining performance, allowing organizations to achieve dramatic improvements in throughput and cost efficiency. Learn how to implement distillation with Amazon Bedrock or by building a custom solution on Amazon SageMaker. Julien Simon will showcase how Arcee AI uses distillation to develop industry-leading small language models (SLMs) based on open architectures. He will also introduce the open-source DistillKit library and demonstrate several newly distilled SLMs from Arcee AI.
Conference
Knowledge Distillation
Small Language Models
Amazon Bedrock
Amazon SageMaker
DistillKit
Arcee AI
Kilkenny
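The distillation idea described in the session above can be sketched as a temperature-softened KL-divergence loss between teacher and student predictions, the classic formulation. This NumPy example is illustrative only and is not DistillKit's API:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about wrong classes.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across T.
    p = softmax(teacher_logits, T)   # soft targets from the teacher
    q = softmax(student_logits, T)   # student predictions
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

teacher = np.array([[4.0, 1.0, 0.5]])
student_near = np.array([[3.5, 1.2, 0.4]])  # roughly agrees with teacher
student_far = np.array([[0.5, 1.0, 4.0]])   # disagrees with teacher

# A student whose logits track the teacher's incurs a lower loss.
assert distillation_loss(student_near, teacher) < distillation_loss(student_far, teacher)
```

In a real training loop this term is typically combined with the ordinary cross-entropy loss on the ground-truth labels.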
Conquer AI performance, cost, and scale with AWS AI chips
AWS EMEA Innovate Generative AI+Data Conference
• Conference • Session GENAIT5S3-P • Level 200 • March 6, 2025 • Online
Generative AI promises to revolutionize industries, but its immense computational demands and escalating costs pose significant challenges. To overcome these hurdles, AWS designed the purpose-built AI chips AWS Trainium and AWS Inferentia. In this session, get a close look at the innovation across silicon, servers, and data centers, and hear how AWS customers have built, deployed, and scaled foundation models across various products and services using AWS AI chips.
Online
AWS
AWS Trainium
AWS Inferentia
AI Chips
Foundation Models
EMEA
4th IFC Workshop on Data Science in Central Banking
Bank of Italy, Rome
• Workshop • February 18–20, 2025 • Rome, Italy
Workshop co-hosted by the BIS Irving Fisher Committee on Central Bank Statistics and the Bank of Italy, focusing on generative artificial intelligence (AI) and its potential applications in central banking. The event emphasizes ongoing projects and the exchange of experiences to foster in-house expertise and reduce reliance on external service providers. Topics include generative AI methods, cloud computing, open-source software for official statistics, data architectures, and addressing data privacy and security concerns in data-driven environments.
Workshop
Central Banking
Generative AI
Data Science
Bank of Italy
Rome
ODSC AI Builders Summit
Online • Conference • January 15–16, 2025
Virtual participation in the ODSC AI Builders Summit, focusing on AI development and implementation.
Online
AI Builders
ODSC