Metadata-Version: 2.1
Name: mlx
Version: 0.0.7
Summary: A framework for machine learning on Apple silicon.
Author: MLX Contributors
Author-email: mlx@group.apple.com
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Provides-Extra: dev
Requires-Dist: pre-commit ; extra == 'dev'
Requires-Dist: pybind11-stubgen ; extra == 'dev'
Provides-Extra: testing
Requires-Dist: numpy ; extra == 'testing'
Requires-Dist: torch ; extra == 'testing'
# MLX

[**Quickstart**](#quickstart) | [**Installation**](#installation) |
[**Documentation**](https://ml-explore.github.io/mlx/build/html/index.html) |
[**Examples**](#examples)

[](https://circleci.com/gh/ml-explore/mlx)
MLX is an array framework for machine learning on Apple silicon, brought to you
by Apple machine learning research.

Some key features of MLX include:
- **Familiar APIs**: MLX has a Python API that closely follows NumPy.
  MLX also has a fully featured C++ API, which closely mirrors the Python API.
  MLX has higher-level packages like `mlx.nn` and `mlx.optimizers` with APIs
  that closely follow PyTorch to simplify building more complex models.
- **Composable function transformations**: MLX supports composable function
  transformations for automatic differentiation, automatic vectorization,
  and computation graph optimization.
- **Lazy computation**: Computations in MLX are lazy. Arrays are only
  materialized when needed.
- **Dynamic graph construction**: Computation graphs in MLX are constructed
  dynamically. Changing the shapes of function arguments does not trigger
  slow compilations, and debugging is simple and intuitive.
- **Multi-device**: Operations can run on any of the supported devices
  (currently the CPU and the GPU).
- **Unified memory**: A notable difference between MLX and other frameworks
  is the *unified memory model*. Arrays in MLX live in shared memory.
  Operations on MLX arrays can be performed on any of the supported
  device types without transferring data.
MLX is designed by machine learning researchers for machine learning
researchers. The framework is intended to be user-friendly, but still efficient
for training and deploying models. The design of the framework itself is also
conceptually simple. We intend to make it easy for researchers to extend and
improve MLX, with the goal of quickly exploring new ideas.
The design of MLX is inspired by frameworks like
[NumPy](https://numpy.org/doc/stable/index.html),
[PyTorch](https://pytorch.org/), [JAX](https://github.com/google/jax), and
[ArrayFire](https://arrayfire.org/).
## Examples

The [MLX examples repo](https://github.com/ml-explore/mlx-examples) has a
variety of examples, including:

- [Transformer language model](https://github.com/ml-explore/mlx-examples/tree/main/transformer_lm) training.
- Large-scale text generation with
  [LLaMA](https://github.com/ml-explore/mlx-examples/tree/main/llms/llama) and
  finetuning with [LoRA](https://github.com/ml-explore/mlx-examples/tree/main/lora).
- Generating images with [Stable Diffusion](https://github.com/ml-explore/mlx-examples/tree/main/stable_diffusion).
- Speech recognition with [OpenAI's Whisper](https://github.com/ml-explore/mlx-examples/tree/main/whisper).
## Quickstart

See the [quick start
guide](https://ml-explore.github.io/mlx/build/html/quick_start.html)
in the documentation.
## Installation

MLX is available on [PyPI](https://pypi.org/project/mlx/). To install the Python API, run:

```
pip install mlx
```
Check out the
[documentation](https://ml-explore.github.io/mlx/build/html/install.html#)
for more information on building the C++ and Python APIs from source.
## Contributing

Check out the [contribution guidelines](CONTRIBUTING.md) for more information
on contributing to MLX. See the
[docs](https://ml-explore.github.io/mlx/build/html/install.html) for more
information on building from source and running tests.

We are grateful for all of [our
contributors](ACKNOWLEDGMENTS.md#Individual-Contributors). If you contribute
to MLX and wish to be acknowledged, please add your name to the list in your
pull request.
## Citing MLX

The MLX software suite was initially developed with equal contribution by Awni
Hannun, Jagrit Digani, Angelos Katharopoulos, and Ronan Collobert. If you find
MLX useful in your research and wish to cite it, please use the following
BibTeX entry:
```
@software{mlx2023,
  author = {Awni Hannun and Jagrit Digani and Angelos Katharopoulos and Ronan Collobert},
  title = {{MLX}: Efficient and flexible machine learning on Apple silicon},
  url = {https://github.com/ml-explore},
  version = {0.0},
  year = {2023},
}
```