What is Ollama?
Ollama is an open-source tool that allows you to run large language models (LLMs) locally on your machine. It simplifies the process of downloading, setting up, and interacting with various LLMs, making them accessible for developers without requiring extensive cloud infrastructure. This is particularly useful for applications that need AI capabilities without constant internet connectivity or for privacy-conscious projects.
Why Integrate Ollama with Flutter?
Integrating Ollama with Flutter opens up a world of possibilities for your mobile and web applications. You can leverage powerful AI features such as text generation, summarization, code completion, and more, directly within your Flutter app. This guide will walk you through the process of setting up and making API calls to an Ollama instance from your Flutter project.
Prerequisites
- Ollama installed and running on your local machine. You can download it from ollama.ai.
- A Flutter development environment set up.
- Basic understanding of Dart and Flutter.
Step 1: Install Ollama and Download a Model
First, ensure you have Ollama installed. Once installed, you need to download a model to interact with. Open your terminal and run:
ollama pull llama2
This command downloads the Llama 2 model. You can replace llama2 with any other model available in the Ollama library.
Step 2: Set Up Your Flutter Project
Create a new Flutter project or navigate to an existing one:
flutter create ollama_flutter_app
cd ollama_flutter_app
Step 3: Add the HTTP Package
To make API requests to Ollama, you’ll need the http package. Add it to your pubspec.yaml file:
dependencies:
  flutter:
    sdk: flutter
  http: ^1.1.0
Then, run flutter pub get in your terminal.
Step 4: Create a Service to Interact with Ollama
Create a new Dart file (e.g., lib/ollama_service.dart) to handle the communication with the Ollama API. Ollama runs a local server, typically on http://localhost:11434. We’ll use the /api/generate endpoint. Note that http://localhost only reaches your machine when the app runs on the same machine (desktop or web targets); on an Android emulator, use http://10.0.2.2:11434 to reach the host, and on a physical device, use your machine’s LAN IP address.
import 'dart:convert';

import 'package:http/http.dart' as http;

class OllamaService {
  final String _baseUrl = 'http://localhost:11434/api/generate';

  Future<String> generateText(String prompt, {String model = 'llama2'}) async {
    try {
      final response = await http.post(
        Uri.parse(_baseUrl),
        headers: {"Content-Type": "application/json"},
        body: json.encode({
          'model': model,
          'prompt': prompt,
          'stream': false, // Set to true if you want to stream the response
        }),
      );

      if (response.statusCode == 200) {
        final data = json.decode(response.body);
        return data['response'];
      } else {
        // Handle API errors
        print('Error: ${response.statusCode}');
        print('Response: ${response.body}');
        return 'Failed to generate text. Error: ${response.statusCode}';
      }
    } catch (e) {
      // Handle network or other exceptions
      print('Exception: $e');
      return 'An error occurred: $e';
    }
  }
}
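Before wiring the service into a UI, you can sanity-check it from a plain Dart entry point. This is a minimal sketch, assuming Ollama is running locally on the default port and the llama2 model has already been pulled; it only depends on the service file above and the http package.

```dart
import 'ollama_service.dart'; // the file created in Step 4

// Run with `dart run` from the project root to confirm the server
// is reachable before building the Flutter UI around it.
Future<void> main() async {
  final service = OllamaService();
  final reply = await service.generateText('Say hello in one short sentence.');
  print(reply); // prints the model's generated text, or an error message
}
```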
Step 5: Use the Service in Your Flutter App
Now, you can use the OllamaService in your Flutter UI. Here’s an example of how to call it from a simple widget:
import 'package:flutter/material.dart';

import 'ollama_service.dart'; // Import your Ollama service

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Ollama Flutter Demo',
      home: const OllamaScreen(),
    );
  }
}

class OllamaScreen extends StatefulWidget {
  const OllamaScreen({super.key});

  @override
  State<OllamaScreen> createState() => _OllamaScreenState();
}

class _OllamaScreenState extends State<OllamaScreen> {
  final TextEditingController _promptController = TextEditingController();
  final OllamaService _ollamaService = OllamaService();
  String _generatedText = '';
  bool _isLoading = false;

  Future<void> _generateContent() async {
    if (_promptController.text.isEmpty) return;

    setState(() {
      _isLoading = true;
      _generatedText = '';
    });

    final String prompt = _promptController.text;
    final String response = await _ollamaService.generateText(prompt);

    setState(() {
      _generatedText = response;
      _isLoading = false;
    });
  }

  @override
  void dispose() {
    _promptController.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text('Ollama Flutter Integration'),
      ),
      body: Padding(
        padding: const EdgeInsets.all(16.0),
        child: Column(
          children: [
            TextField(
              controller: _promptController,
              decoration: const InputDecoration(labelText: 'Enter your prompt'),
            ),
            const SizedBox(height: 20),
            ElevatedButton(
              onPressed: _isLoading ? null : _generateContent,
              child: _isLoading
                  ? const CircularProgressIndicator()
                  : const Text('Generate'),
            ),
            const SizedBox(height: 20),
            Expanded(
              child: SingleChildScrollView(
                child: Text(_generatedText),
              ),
            ),
          ],
        ),
      ),
    );
  }
}
Important Considerations
- Error Handling: The provided code includes basic error handling. You should implement more robust error management for production applications.
- Streaming Responses: For longer generations, consider setting 'stream': true in the API call and handling the streamed response in your Flutter app for a better user experience. This involves listening to the response chunk by chunk.
- Model Management: Allow users to select different models or manage model downloads within your app if necessary.
- Security: If deploying to a production environment where Ollama is not running locally on the user’s device, ensure secure communication and proper authentication.
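The streaming approach mentioned above can be sketched roughly as follows. With 'stream': true, Ollama's /api/generate endpoint returns one JSON object per line, each carrying a partial 'response' field and a 'done' flag; the sketch below assumes the same base URL and model as the service in Step 4 and uses the http package's Request/StreamedResponse API.

```dart
import 'dart:convert';

import 'package:http/http.dart' as http;

/// Yields text fragments as they arrive, instead of waiting for the
/// full completion. A sketch only; production code should also handle
/// non-200 status codes and network errors.
Stream<String> generateTextStream(String prompt, {String model = 'llama2'}) async* {
  final request =
      http.Request('POST', Uri.parse('http://localhost:11434/api/generate'))
        ..headers['Content-Type'] = 'application/json'
        ..body = json.encode({'model': model, 'prompt': prompt, 'stream': true});

  final response = await request.send();

  // Each line of the streamed body is a standalone JSON object.
  await for (final line in response.stream
      .transform(utf8.decoder)
      .transform(const LineSplitter())) {
    if (line.trim().isEmpty) continue;
    final data = json.decode(line) as Map<String, dynamic>;
    yield (data['response'] as String?) ?? '';
    if (data['done'] == true) break;
  }
}
```

In the UI, you would listen to this stream and append each fragment to the displayed text inside setState, which makes long generations feel responsive.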
Conclusion
By following these steps, you can successfully integrate Ollama into your Flutter applications, bringing the power of local LLMs to your users. This opens up exciting avenues for creating intelligent and interactive user experiences.