Artificial Intelligence (AI) is transforming the way we build applications, and integrating AI capabilities into mobile apps has never been easier. In this blog post, we’ll walk through how to create a Flutter app that leverages the Ollama model to provide AI-powered functionality. Ollama is a lightweight framework for running large language models (LLMs) locally, making it an excellent choice for building privacy-focused and offline-capable AI apps.
By the end of this tutorial, you’ll have a fully functional Flutter app that interacts with an AI model to generate responses based on user input.
What is Ollama?
Ollama is a lightweight runtime for running large language models (LLMs) like Llama, Mistral, or other open-source models. It allows developers to run these models locally on their devices, ensuring data privacy and eliminating the need for an internet connection. Ollama simplifies the process of deploying and interacting with AI models by providing a simple API interface.
In this example, we’ll use Ollama to power a chatbot-like interface in our Flutter app. The app will send user input to the Ollama server and display the AI-generated response.
Prerequisites
Before we begin, ensure you have the following set up:
- Flutter SDK: Install Flutter from flutter.dev.
- Ollama Installed: Follow the instructions on the Ollama GitHub page to install and set up Ollama on your machine.
- Dart HTTP Package: Add the http package to your Flutter project for making API requests. Add the following to your pubspec.yaml file:
dependencies:
  flutter:
    sdk: flutter
  http: ^1.2.0
Run flutter pub get to install the dependencies.
Step 1: Setting Up Ollama
- Install a Model: Use Ollama to download a pre-trained model. For example, to pull the Llama 2 model, run the following command in your terminal:
ollama pull llama2
- Start the Ollama Server: Run the Ollama server to host the model locally:
ollama serve
By default, the server runs on http://localhost:11434.
- Test the API: You can test the Ollama API using curl. For example:
curl http://localhost:11434/api/generate -d '{"model": "llama2", "prompt": "Explain AI in one sentence.", "stream": false}'
This should return a single JSON object containing the AI-generated text. (Setting "stream": false matters: by default, Ollama streams the response as a series of newline-delimited JSON objects.)
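For reference, the response should look roughly like this (trimmed for brevity; the exact fields can vary by Ollama version):
{
  "model": "llama2",
  "created_at": "2024-01-01T00:00:00Z",
  "response": "AI is the simulation of human intelligence by machines.",
  "done": true
}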
Step 2: Creating the Flutter App
1. Project Structure
Create a new Flutter project:
flutter create flutter_ai_ollama_app
cd flutter_ai_ollama_app
Replace the contents of lib/main.dart with the following code:
import 'package:flutter/material.dart';
import 'package:http/http.dart' as http;
import 'dart:convert';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter AI Chat',
      theme: ThemeData(primarySwatch: Colors.blue),
      home: const AiChatScreen(),
    );
  }
}

class AiChatScreen extends StatefulWidget {
  const AiChatScreen({super.key});

  @override
  State<AiChatScreen> createState() => _AiChatScreenState();
}

class _AiChatScreenState extends State<AiChatScreen> {
  final TextEditingController _controller = TextEditingController();
  final List<String> _messages = [];
  bool _isLoading = false;

  // Note: on an Android emulator, replace localhost with 10.0.2.2, which is
  // the emulator's alias for the host machine; localhost inside the emulator
  // refers to the emulator itself.
  Future<void> _sendMessage(String prompt) async {
    setState(() {
      _messages.add('You: $prompt');
      _isLoading = true;
    });
    try {
      final response = await http.post(
        Uri.parse('http://localhost:11434/api/generate'),
        headers: {'Content-Type': 'application/json'},
        body: jsonEncode({
          'model': 'llama2',
          'prompt': prompt,
          // Ollama streams by default; request a single JSON response instead
          // so the body below can be decoded in one jsonDecode call.
          'stream': false,
        }),
      );
      if (response.statusCode == 200) {
        final jsonResponse = jsonDecode(response.body);
        final aiResponse = jsonResponse['response'];
        setState(() {
          _messages.add('AI: $aiResponse');
        });
      } else {
        setState(() {
          _messages.add('AI: Error generating response.');
        });
      }
    } catch (e) {
      setState(() {
        _messages.add('AI: Failed to connect to the server.');
      });
    } finally {
      setState(() {
        _isLoading = false;
      });
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text('Flutter AI Chat')),
      body: Column(
        children: [
          Expanded(
            child: ListView.builder(
              itemCount: _messages.length,
              itemBuilder: (context, index) {
                return Padding(
                  padding: const EdgeInsets.all(8.0),
                  child: Text(_messages[index]),
                );
              },
            ),
          ),
          Padding(
            padding: const EdgeInsets.all(8.0),
            child: Row(
              children: [
                Expanded(
                  child: TextField(
                    controller: _controller,
                    decoration: const InputDecoration(
                      hintText: 'Type your message...',
                    ),
                  ),
                ),
                IconButton(
                  onPressed: _isLoading
                      ? null
                      : () {
                          final prompt = _controller.text.trim();
                          if (prompt.isNotEmpty) {
                            _sendMessage(prompt);
                            _controller.clear();
                          }
                        },
                  icon: const Icon(Icons.send),
                ),
              ],
            ),
          ),
        ],
      ),
    );
  }
}
Step 3: Running the App
- Start the Ollama server on your machine.
- Run the Flutter app on an emulator or physical device:
flutter run
- Type a message in the app’s input field and press the send button. The app will send the message to the Ollama server and display the AI-generated response.
Step 4: Customizing the App
1. Adding Loading Indicators
To improve the user experience, show a loading spinner while waiting for the AI response. Update the UI to include a CircularProgressIndicator when _isLoading is true, as in the sketch below.
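One minimal way to do this is with Dart's collection-if inside the Column's children (the placement here is a suggestion, not the only option):

// Inside the Column's children, between the Expanded ListView and the input row:
if (_isLoading)
  const Padding(
    padding: EdgeInsets.all(8.0),
    child: CircularProgressIndicator(),
  ),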
2. Supporting Multiple Models
You can allow users to select different models (e.g., Llama, Mistral) by modifying the API request payload. Add a dropdown menu to let users choose their preferred model.
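Here is a minimal sketch of what that could look like; the _selectedModel field is illustrative and not part of the code above:

// In _AiChatScreenState: track which model the user picked.
String _selectedModel = 'llama2';

// Somewhere in the widget tree, e.g. above the input row:
DropdownButton<String>(
  value: _selectedModel,
  items: const [
    DropdownMenuItem(value: 'llama2', child: Text('Llama 2')),
    DropdownMenuItem(value: 'mistral', child: Text('Mistral')),
  ],
  onChanged: (value) {
    if (value != null) setState(() => _selectedModel = value);
  },
),

Then use 'model': _selectedModel in the _sendMessage request body instead of the hard-coded 'llama2'. (Each model must first be downloaded with ollama pull.)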
3. Styling the Chat Interface
Enhance the chat interface by adding avatars, timestamps, and better styling for messages.
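As a starting point, here is one way to render messages as chat bubbles (a sketch built on the 'You: ' / 'AI: ' prefixes used above; colors and spacing are arbitrary):

// Replace the plain Text in the ListView's itemBuilder with this helper.
Widget _buildBubble(String message) {
  final isUser = message.startsWith('You: ');
  return Align(
    alignment: isUser ? Alignment.centerRight : Alignment.centerLeft,
    child: Container(
      margin: const EdgeInsets.symmetric(horizontal: 8.0, vertical: 4.0),
      padding: const EdgeInsets.all(10.0),
      decoration: BoxDecoration(
        color: isUser ? Colors.blue.shade100 : Colors.grey.shade200,
        borderRadius: BorderRadius.circular(12.0),
      ),
      child: Text(message),
    ),
  );
}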
Conclusion
In this tutorial, we built a Flutter app that integrates with the Ollama framework to provide AI-powered chat functionality. By leveraging Ollama’s local AI models, we ensured that the app is both privacy-focused and capable of running offline.
This example demonstrates the potential of combining Flutter with AI frameworks like Ollama to create innovative and user-friendly applications. You can extend this app further by adding features like voice input, multi-language support, or even integrating additional AI models.
Final Note: For more advanced use cases, explore the Ollama documentation and experiment with other models to enhance your app’s capabilities. Happy coding!