Hugging Face Blog Posts

23 articles on transformer models and the HF ecosystem

Accelerating Protein Language Model ProtST on Intel Gaudi 2 (July 3, 2024)
Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon (May 9, 2024)
A Chatbot on your Laptop: Phi-2 on Intel Meteor Lake (March 20, 2024)
SafeCoder vs. Closed-source Code Assistants (September 11, 2023)
Fine-tuning Stable Diffusion models on Intel CPUs (July 14, 2023)
Hugging Face and AMD partner on accelerating state-of-the-art models for CPU and GPU platforms (June 13, 2023)
Hugging Face Endpoints on Azure (May 23, 2023)
Hugging Face and IBM partner on watsonx.ai, the next-generation enterprise studio for AI builders (May 23, 2023)
Smaller is better: Q8-Chat, an efficient generative AI experience on Xeon (May 16, 2023)
Accelerating Hugging Face Transformers with AWS Inferentia2 (April 17, 2023)
Accelerating Stable Diffusion Inference on Intel CPUs (March 28, 2023)
How Hugging Face Accelerated Development of Witty Works Writing Assistant (March 1, 2023)
Hugging Face and AWS partner to make AI more accessible (February 21, 2023)
Accelerating PyTorch Transformers with Intel Sapphire Rapids, part 2 (February 6, 2023)
Accelerating PyTorch Transformers with Intel Sapphire Rapids, part 1 (January 2, 2023)
An Overview of Inference Solutions on Hugging Face (November 21, 2022)
Accelerate your models with Optimum Intel and OpenVINO (November 2, 2022)
Getting started with Hugging Face Inference Endpoints (October 14, 2022)
Deep Dive: Vision Transformers On Hugging Face Optimum Graphcore (August 18, 2022)
Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration (June 15, 2022)
Getting Started with Transformers on Habana Gaudi (April 26, 2022)
Getting Started with Hugging Face Transformers for IPUs with Optimum (November 30, 2021)
Accelerating PyTorch distributed fine-tuning with Intel technologies (November 19, 2021)