Streamlining LLM Inference at the Edge with TFLite

XNNPACK, the default TensorFlow Lite CPU inference engine, has been updated to improve performance and memory management, allow cross-process collaboration, and simplify the user-facing API.

Source: Google Developers Blog