| author | Tekky <98614666+xtekky@users.noreply.github.com> | 2024-10-30 09:57:55 +0100 |
|---|---|---|
| committer | Tekky <98614666+xtekky@users.noreply.github.com> | 2024-10-30 09:57:55 +0100 |
| commit | 1443c60cc86f7f02cc6c7a4b2a31d6b1dad66a26 (patch) | |
| tree | 435d831df4ad5c18839cfaa647f23e5be035cdd6 /docs | |
| parent | implement direct import of `Client` without using `g4f.client` (diff) | |
| parent | Merge pull request #2304 from kqlio67/main (diff) | |
Diffstat (limited to 'docs')
| -rw-r--r-- | docs/async_client.md | 29 |
| -rw-r--r-- | docs/client.md | 42 |
| -rw-r--r-- | docs/interference-api.md | 106 |
| -rw-r--r-- | docs/providers-and-models.md | 38 |
4 files changed, 164 insertions, 51 deletions
diff --git a/docs/async_client.md b/docs/async_client.md
index 0c296c09..0719a463 100644
--- a/docs/async_client.md
+++ b/docs/async_client.md
@@ -10,6 +10,7 @@ The G4F async client API is designed to be compatible with the OpenAI API, makin
   - [Key Features](#key-features)
   - [Getting Started](#getting-started)
     - [Initializing the Client](#initializing-the-client)
+    - [Creating Chat Completions](#creating-chat-completions)
     - [Configuration](#configuration)
   - [Usage Examples](#usage-examples)
     - [Text Completions](#text-completions)
@@ -51,6 +52,28 @@ client = Client(
 )
 ```
+
+## Creating Chat Completions
+**Here’s an example of creating chat completions:**
+```python
+response = await client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test"
+        }
+    ]
+    # Add other parameters as needed
+)
+```
+
+**This example:**
+  - Sends a single user message: `Say this is a test`
+  - Leaves optional parameters such as `temperature` and `max_tokens` at their defaults; set them for more control over the output
+  - Omits `stream`, so one complete response is returned
+
+You can adjust these parameters based on your specific needs.

 ### Configuration
@@ -164,7 +187,7 @@ async def main():
         response = await client.images.async_generate(
             prompt="a white siamese cat",
-            model="dall-e-3"
+            model="flux"
         )

         image_url = response.data[0].url
@@ -185,7 +208,7 @@ async def main():
         response = await client.images.async_generate(
             prompt="a white siamese cat",
-            model="dall-e-3",
+            model="flux",
             response_format="b64_json"
         )
@@ -217,7 +240,7 @@ async def main():
         )

         task2 = client.images.async_generate(
-            model="dall-e-3",
+            model="flux",
             prompt="a white siamese cat"
         )
diff --git a/docs/client.md b/docs/client.md
index 08445402..388b2e4b 100644
--- a/docs/client.md
+++ b/docs/client.md
@@ -7,6 +7,7 @@
   - [Getting Started](#getting-started)
     - [Switching to G4F Client](#switching-to-g4f-client)
     - [Initializing the Client](#initializing-the-client)
+    - [Creating Chat Completions](#creating-chat-completions)
     - [Configuration](#configuration)
   - [Usage Examples](#usage-examples)
     - [Text Completions](#text-completions)
@@ -56,6 +57,28 @@ client = Client(
     # Add any other necessary parameters
 )
 ```
+
+## Creating Chat Completions
+**Here’s an example of creating chat completions:**
+```python
+response = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[
+        {
+            "role": "user",
+            "content": "Say this is a test"
+        }
+    ]
+    # Add any other necessary parameters
+)
+```
+
+**This example:**
+  - Sends a single user message: `Say this is a test`
+  - Leaves optional parameters such as `temperature` and `max_tokens` at their defaults; set them for more control over the output
+  - Omits `stream`, so one complete response is returned
+
+You can adjust these parameters based on your specific needs.
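Since the added snippets leave `temperature`, `max_tokens`, and `stream` implicit, here is a minimal sketch of passing them explicitly, assuming the G4F client accepts the same optional keyword arguments as the OpenAI client it mirrors (the concrete values are illustrative only):

```python
from g4f.client import Client

client = Client()

# Parameter values below are illustrative; the client is assumed to
# forward OpenAI-style keyword arguments to the underlying provider.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say this is a test"}],
    temperature=0.7,   # lower values make output more deterministic
    max_tokens=100,    # upper bound on the number of generated tokens
    stream=False,      # return one complete response instead of a stream
)

print(response.choices[0].message.content)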
 ## Configuration
@@ -129,7 +152,7 @@ from g4f.client import Client
 client = Client()

 response = client.images.generate(
-    model="dall-e-3",
+    model="flux",
     prompt="a white siamese cat"
     # Add any other necessary parameters
 )
@@ -139,6 +162,23 @@ image_url = response.data[0].url
 print(f"Generated image URL: {image_url}")
 ```
+
+#### Base64 Response Format
+```python
+from g4f.client import Client
+
+client = Client()
+
+response = client.images.generate(
+    model="flux",
+    prompt="a white siamese cat",
+    response_format="b64_json"
+)
+
+base64_text = response.data[0].b64_json
+print(base64_text)
+```
+
 ### Creating Image Variations
diff --git a/docs/interference-api.md b/docs/interference-api.md
index 4050f84f..2e18e7b5 100644
--- a/docs/interference-api.md
+++ b/docs/interference-api.md
@@ -1,23 +1,30 @@
-# G4F - Interference API Usage Guide
-
+# G4F - Interference API Usage Guide
+
 ## Table of Contents
 - [Introduction](#introduction)
 - [Running the Interference API](#running-the-interference-api)
   - [From PyPI Package](#from-pypi-package)
   - [From Repository](#from-repository)
-  - [Usage with OpenAI Library](#usage-with-openai-library)
-  - [Usage with Requests Library](#usage-with-requests-library)
+  - [Using the Interference API](#using-the-interference-api)
+    - [Basic Usage](#basic-usage)
+    - [With OpenAI Library](#with-openai-library)
+    - [With Requests Library](#with-requests-library)
 - [Key Points](#key-points)
+- [Conclusion](#conclusion)
+
 ## Introduction
-The Interference API allows you to serve other OpenAI integrations with G4F. It acts as a proxy, translating requests to the OpenAI API into requests to the G4F providers.
+The G4F Interference API allows you to serve other OpenAI integrations using G4F (Gpt4free). It acts as a proxy, translating requests intended for the OpenAI API into requests compatible with G4F providers. This guide walks you through setting up, running, and using the Interference API.
+
 ## Running the Interference API
+**You can run the Interference API in two ways:** using the PyPI package or from the repository.
+
 ### From PyPI Package
-**You can run the Interference API directly from the G4F PyPI package:**
+**To run the Interference API directly from the G4F PyPI package, use the following Python code:**
+
 ```python
 from g4f.api import run_api

 run_api()
 ```
-
 ### From Repository
-Alternatively, you can run the Interference API from the cloned repository.
+**If you prefer to run the Interference API from the cloned repository, you have two options:**

-**Run the server with:**
+1. **Using the command line:**
 ```bash
 g4f api
 ```
-or
+
+2. **Using Python:**
 ```bash
 python -m g4f.api.run
 ```
+
+**Once running, the API will be accessible at:** `http://localhost:1337/v1`

-## Usage with OpenAI Library
-
+## Using the Interference API
+
+### Basic Usage
+**You can interact with the Interference API using curl commands for both text and image generation:**
+
+**For text generation:**
+```bash
+curl -X POST "http://localhost:1337/v1/chat/completions" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "messages": [
+      {
+        "role": "user",
+        "content": "Hello"
+      }
+    ],
+    "model": "gpt-3.5-turbo"
+  }'
+```
+
+**For image generation:**
+1. **url:**
+```bash
+curl -X POST "http://localhost:1337/v1/images/generate" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "prompt": "a white siamese cat",
+    "model": "flux",
+    "response_format": "url"
+  }'
+```
+
+2. **b64_json:**
+```bash
+curl -X POST "http://localhost:1337/v1/images/generate" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "prompt": "a white siamese cat",
+    "model": "flux",
+    "response_format": "b64_json"
+  }'
+```
+
+### With OpenAI Library
+
+**You can use the Interference API with the OpenAI Python library by changing the `base_url`:**
 ```python
 from openai import OpenAI

 client = OpenAI(
     api_key="",
-    # Change the API base URL to the local interference API
-    base_url="http://localhost:1337/v1"
+    base_url="http://localhost:1337/v1"
 )

 response = client.chat.completions.create(
     model="gpt-3.5-turbo",
-    messages=[{"role": "user", "content": "write a poem about a tree"}],
+    messages=[{"role": "user", "content": "Write a poem about a tree"}],
     stream=True,
 )
@@ -68,20 +118,20 @@ else:
     content = token.choices[0].delta.content
     if content is not None:
         print(content, end="", flush=True)
 ```

-## Usage with Requests Library
-You can also send requests directly to the Interference API using the requests library.
+### With Requests Library

-**Send a POST request to `/v1/chat/completions` with the request body containing the model and other parameters:**
+**You can also send requests directly to the Interference API using the `requests` library:**
 ```python
 import requests

 url = "http://localhost:1337/v1/chat/completions"
+
 body = {
-    "model": "gpt-3.5-turbo",
+    "model": "gpt-3.5-turbo",
     "stream": False,
     "messages": [
         {"role": "assistant", "content": "What can you do?"}
     ]
 }

 json_response = requests.post(url, json=body).json().get('choices', [])

 for choice in json_response:
     print(choice.get('message', {}).get('content', ''))
 ```

 ## Key Points
-- The Interference API translates OpenAI API requests into G4F provider requests
-- You can run it from the PyPI package or the cloned repository
-- It supports usage with the OpenAI Python library by changing the `base_url`
-- Direct requests can be sent to the API endpoints using libraries like `requests`
+  - The Interference API translates OpenAI API requests into G4F provider requests.
+  - It can be run from either the PyPI package or the cloned repository.
+  - The API supports usage with the OpenAI Python library by changing the `base_url`.
+  - Direct requests can be sent to the API endpoints using libraries like `requests`.
+  - Both text and image generation are supported.

-**_The Interference API allows easy integration of G4F with existing OpenAI-based applications and tools._**
+## Conclusion
+The G4F Interference API provides a seamless way to integrate G4F with existing OpenAI-based applications and tools. By following this guide, you should now be able to set up, run, and use the Interference API for text generation, image creation, or as a drop-in replacement for OpenAI in your projects.
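The curl examples return the image either as a URL or as base64 text; here is a minimal sketch of consuming the `b64_json` variant from Python and writing the decoded bytes to disk, assuming the response body mirrors the OpenAI images schema (`{"data": [{"b64_json": ...}]}`):

```python
import base64
import requests

# Endpoint and payload mirror the curl example above.
url = "http://localhost:1337/v1/images/generate"
body = {
    "prompt": "a white siamese cat",
    "model": "flux",
    "response_format": "b64_json",
}

data = requests.post(url, json=body).json()

# Decode the base64 payload and save it as an image file.
image_bytes = base64.b64decode(data["data"][0]["b64_json"])
with open("siamese_cat.png", "wb") as f:
    f.write(image_bytes)
```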
+
---
diff --git a/docs/providers-and-models.md b/docs/providers-and-models.md
index a6d7ec4b..b3dbd9f1 100644
--- a/docs/providers-and-models.md
+++ b/docs/providers-and-models.md
@@ -51,6 +51,7 @@ This document provides an overview of various AI providers and models, including
 |[free.netfly.top](https://free.netfly.top)|`g4f.Provider.FreeNetfly`|✔|❌|❌|?|![Cloudflare](https://img.shields.io/badge/Cloudflare-f48d37)|❌|
 |[gemini.google.com](https://gemini.google.com)|`g4f.Provider.Gemini`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
 |[ai.google.dev](https://ai.google.dev)|`g4f.Provider.GeminiPro`|✔|❌|✔|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
+|[app.giz.ai](https://app.giz.ai/assistant/)|`g4f.Provider.GizAI`|`gemini-flash, gemini-pro, gpt-4o-mini, gpt-4o, claude-3.5-sonnet, claude-3-haiku, llama-3.1-70b, llama-3.1-8b, mistral-large`|`sdxl, sd-1.5, sd-3.5, dalle-3, flux-schnell, flux1-pro`|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
 |[developers.sber.ru](https://developers.sber.ru/gigachat)|`g4f.Provider.GigaChat`|✔|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
 |[gprochat.com](https://gprochat.com)|`g4f.Provider.GPROChat`|`gemini-pro`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
 |[console.groq.com/playground](https://console.groq.com/playground)|`g4f.Provider.Groq`|✔|❌|❌|?|![Active](https://img.shields.io/badge/Active-brightgreen)|✔|
@@ -63,10 +64,7 @@ This document provides an overview of various AI providers and models, including
 |[app.myshell.ai/chat](https://app.myshell.ai/chat)|`g4f.Provider.MyShell`|✔|❌|?|?|![Disabled](https://img.shields.io/badge/Disabled-red)|❌|
 |[nexra.aryahcr.cc/bing](https://nexra.aryahcr.cc/documentation/bing/en)|`g4f.Provider.NexraBing`|✔|❌|❌|✔|![Disabled](https://img.shields.io/badge/Disabled-red)|❌|
 |[nexra.aryahcr.cc/blackbox](https://nexra.aryahcr.cc/documentation/blackbox/en)|`g4f.Provider.NexraBlackbox`|`blackboxai`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT`|`gpt-4, gpt-3.5-turbo, gpt-3`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT4o`|`gpt-4o`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGptV2`|`gpt-4`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
-|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGptWeb`|`gpt-4`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
+|[nexra.aryahcr.cc/chatgpt](https://nexra.aryahcr.cc/documentation/chatgpt/en)|`g4f.Provider.NexraChatGPT`|`gpt-4, gpt-3.5-turbo, gpt-3, gpt-4o`|❌|❌|✔|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
 |[nexra.aryahcr.cc/dall-e](https://nexra.aryahcr.cc/documentation/dall-e/en)|`g4f.Provider.NexraDallE`|❌|`dalle`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
 |[nexra.aryahcr.cc/dall-e](https://nexra.aryahcr.cc/documentation/dall-e/en)|`g4f.Provider.NexraDallE2`|❌|`dalle-2`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
 |[nexra.aryahcr.cc/emi](https://nexra.aryahcr.cc/documentation/emi/en)|`g4f.Provider.NexraEmi`|❌|`emi`|❌|❌|![Active](https://img.shields.io/badge/Active-brightgreen)|❌|
@@ -108,18 +106,18 @@ This document provides an overview of various AI providers and models, including
 |-------|---------------|-----------|---------|
 |gpt-3|OpenAI|1+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-base)|
 |gpt-3.5-turbo|OpenAI|5+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-3-5-turbo)|
-|gpt-4|OpenAI|33+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
-|gpt-4-turbo|OpenAI|2+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
-|gpt-4o|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
+|gpt-4|OpenAI|7+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
+|gpt-4-turbo|OpenAI|3+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4)|
+|gpt-4o|OpenAI|10+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o)|
 |gpt-4o-mini|OpenAI|14+ Providers|[platform.openai.com](https://platform.openai.com/docs/models/gpt-4o-mini)|
 |o1|OpenAI|1+ Providers|[platform.openai.com](https://openai.com/index/introducing-openai-o1-preview/)|
-|o1-mini|OpenAI|1+ Providers|[platform.openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
+|o1-mini|OpenAI|2+ Providers|[platform.openai.com](https://openai.com/index/openai-o1-mini-advancing-cost-efficient-reasoning/)|
 |llama-2-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-2-7b)|
 |llama-2-13b|Meta Llama|1+ Providers|[llama.com](https://www.llama.com/llama2/)|
 |llama-3-8b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
 |llama-3-70b|Meta Llama|4+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3/)|
 |llama-3.1-8b|Meta Llama|7+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
-|llama-3.1-70b|Meta Llama|13+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
+|llama-3.1-70b|Meta Llama|14+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
 |llama-3.1-405b|Meta Llama|5+ Providers|[ai.meta.com](https://ai.meta.com/blog/meta-llama-3-1/)|
 |llama-3.2-1b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Llama-3.2-1B)|
 |llama-3.2-3b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/blog/llama32)|
@@ -127,17 +125,17 @@ This document provides an overview of various AI providers and models, including
 |llama-3.2-90b|Meta Llama|2+ Providers|[ai.meta.com](https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/)|
 |llamaguard-7b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/LlamaGuard-7b)|
 |llamaguard-2-8b|Meta Llama|1+ Providers|[huggingface.co](https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B)|
-|mistral-7b|Mistral AI|5+ Providers|[mistral.ai](https://mistral.ai/news/announcing-mistral-7b/)|
+|mistral-7b|Mistral AI|4+ Providers|[mistral.ai](https://mistral.ai/news/announcing-mistral-7b/)|
 |mixtral-8x7b|Mistral AI|6+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-of-experts/)|
 |mixtral-8x22b|Mistral AI|3+ Providers|[mistral.ai](https://mistral.ai/news/mixtral-8x22b/)|
-|mistral-nemo|Mistral AI|1+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
-|mistral-large|Mistral AI|1+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)|
+|mistral-nemo|Mistral AI|2+ Providers|[huggingface.co](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|
+|mistral-large|Mistral AI|2+ Providers|[mistral.ai](https://mistral.ai/news/mistral-large-2407/)|
 |mixtral-8x7b-dpo|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO)|
 |yi-34b|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)|
-|hermes-3|NousResearch|1+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)|
+|hermes-3|NousResearch|2+ Providers|[huggingface.co](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)|
 |gemini|Google DeepMind|1+ Providers|[deepmind.google](http://deepmind.google/technologies/gemini/)|
-|gemini-flash|Google DeepMind|3+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
-|gemini-pro|Google DeepMind|9+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
+|gemini-flash|Google DeepMind|4+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/flash/)|
+|gemini-pro|Google DeepMind|10+ Providers|[deepmind.google](https://deepmind.google/technologies/gemini/pro/)|
 |gemma-2b|Google|5+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2b)|
 |gemma-2b-9b|Google|1+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2-9b)|
 |gemma-2b-27b|Google|2+ Providers|[huggingface.co](https://huggingface.co/google/gemma-2-27b)|
@@ -145,10 +143,10 @@ This document provides an overview of various AI providers and models, including
 |gemma-2|Google|2+ Providers|[huggingface.co](https://huggingface.co/blog/gemma2)|
 |gemma_2_27b|Google|1+ Providers|[huggingface.co](https://huggingface.co/blog/gemma2)|
 |claude-2.1|Anthropic|1+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-2)|
-|claude-3-haiku|Anthropic|3+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
+|claude-3-haiku|Anthropic|4+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-haiku)|
 |claude-3-sonnet|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
 |claude-3-opus|Anthropic|2+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-family)|
-|claude-3.5-sonnet|Anthropic|5+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
+|claude-3.5-sonnet|Anthropic|6+ Providers|[anthropic.com](https://www.anthropic.com/news/claude-3-5-sonnet)|
 |blackboxai|Blackbox AI|2+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
 |blackboxai-pro|Blackbox AI|1+ Providers|[docs.blackbox.chat](https://docs.blackbox.chat/blackbox-ai-1)|
 |yi-1.5-9b|01-ai|1+ Providers|[huggingface.co](https://huggingface.co/01-ai/Yi-1.5-9B)|
@@ -196,11 +194,12 @@ This document provides an overview of various AI providers and models, including
 ### Image Models
 | Model | Base Provider | Providers | Website |
 |-------|---------------|-----------|---------|
-|sdxl|Stability AI|2+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)|
+|sdxl|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/en/using-diffusers/sdxl)|
 |sdxl-lora|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/blog/lcm_lora)|
 |sdxl-turbo|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/stabilityai/sdxl-turbo)|
 |sd-1.5|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/runwayml/stable-diffusion-v1-5)|
 |sd-3|Stability AI|1+ Providers|[huggingface.co](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/stable_diffusion_3)|
+|sd-3.5|Stability AI|1+ Providers|[stability.ai](https://stability.ai/news/introducing-stable-diffusion-3-5)|
 |playground-v2.5|Playground AI|1+ Providers|[huggingface.co](https://huggingface.co/playgroundai/playground-v2.5-1024px-aesthetic)|
 |flux|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
 |flux-pro|Black Forest Labs|2+ Providers|[github.com/black-forest-labs/flux](https://github.com/black-forest-labs/flux)|
@@ -210,10 +209,9 @@ This document provides an overview of various AI providers and models, including
 |flux-disney|Flux AI|1+ Providers|[]()|
 |flux-pixel|Flux AI|1+ Providers|[]()|
 |flux-4o|Flux AI|1+ Providers|[]()|
-|flux-schnell|Black Forest Labs|1+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
+|flux-schnell|Black Forest Labs|2+ Providers|[huggingface.co](https://huggingface.co/black-forest-labs/FLUX.1-schnell)|
 |dalle|OpenAI|1+ Providers|[openai.com](https://openai.com/index/dall-e/)|
 |dalle-2|OpenAI|1+ Providers|[openai.com](https://openai.com/index/dall-e-2/)|
-|dalle-3|OpenAI|2+ Providers|[openai.com](https://openai.com/index/dall-e-3/)|
 |emi||1+ Providers|[]()|
 |any-dark||1+ Providers|[]()|
 |midjourney|Midjourney|1+ Providers|[docs.midjourney.com](https://docs.midjourney.com/docs/model-versions)|
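For reference, here is a minimal sketch of pinning one of the providers listed above instead of relying on g4f's automatic selection. It assumes the `provider` and `image_provider` keyword arguments behave as in the client docs, and uses `GizAI`, the provider this commit adds, with models taken from its table row:

```python
from g4f.client import Client
from g4f.Provider import GizAI

# Pin the newly added GizAI provider rather than letting g4f auto-select;
# the provider/image_provider kwargs are assumed to route all calls to it.
client = Client(provider=GizAI, image_provider=GizAI)

chat = client.chat.completions.create(
    model="gpt-4o-mini",  # one of the text models the GizAI row lists
    messages=[{"role": "user", "content": "Say this is a test"}],
)
print(chat.choices[0].message.content)

image = client.images.generate(
    model="flux-schnell",  # listed in the GizAI image-model column
    prompt="a white siamese cat",
)
print(image.data[0].url)
```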