What: A Flutter app for Android/iOS/Web that captures or selects images, uses OpenAI for an initial analysis, detects any QR codes, then re-analyzes the decoded text to highlight potential security risks.
Why: Many QR codes hide malicious links or actions. The app helps users understand what a QR code does before acting on it.
Key Features: Camera capture, gallery import, AI-based text analysis, and a modular code structure (home screen, camera screen, OpenAI service, QR utils).
QR Analyzer Flutter App is a cross-platform application that:
- Lets users capture or select an image.
- Performs an initial AI-based analysis using OpenAI.
- Detects and decodes any QR codes in that image.
- Sends each decoded QR text to OpenAI for further analysis (e.g., discovering if it’s a URL, Wi-Fi credential, etc.).
The goal is to enhance user awareness of what a QR code really does—before the phone automatically acts on it. This includes highlighting potential security risks.
QR codes can hide malicious or suspicious operations (like leading users to dangerous links or prompting untrusted APK installations). By analyzing the QR code text through AI, users can understand the code’s intent, potential threats, and recommended precautions.
- Camera Capture: Opens a real-time camera interface to take a photo.
- Gallery Import: Allows picking images from the user’s device.
- AI-Based Analysis:
  - Initial Image Check: The app conceptually sends the image to OpenAI, asking for possible insights (e.g., “Is there an obvious security red flag?”).
  - QR Text Analysis: For every detected QR code, the extracted text is again sent to OpenAI to decode its purpose and highlight potential security issues.
- Modular Architecture:
  - Camera Screen for real-time camera usage,
  - Home Screen for the main menu,
  - OpenAI Service handling all network calls,
  - QR Utilities for decoding the image locally.
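As a sketch, the wiring between these modules might look like the following minimal `main.dart`. The `HomeScreen` name comes from the file list below; everything else (app class name, theme) is illustrative:

```dart
import 'package:flutter/material.dart';
import 'home_screen.dart';

void main() {
  runApp(const QrAnalyzerApp());
}

class QrAnalyzerApp extends StatelessWidget {
  const QrAnalyzerApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'QR Analyzer',
      theme: ThemeData(primarySwatch: Colors.blue),
      // HomeScreen offers the "Open Camera" and "Choose from Gallery" actions.
      home: const HomeScreen(),
    );
  }
}
```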
qr_analyzer/
├── lib/
│ ├── main.dart
│ ├── home_screen.dart
│ ├── camera_screen.dart
│ ├── openai_service.dart
│ └── qr_utils.dart
├── pubspec.yaml
└── README.md
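A plausible dependency block for `pubspec.yaml`, assuming the plugins mentioned on this page (`camera`, `image_picker`, `qr_code_tools`) plus `http` for the OpenAI calls; the version constraints are illustrative, not pinned by the project:

```yaml
dependencies:
  flutter:
    sdk: flutter
  camera: ^0.10.0       # real-time camera preview (Android/iOS)
  image_picker: ^1.0.0  # gallery import
  qr_code_tools: ^0.1.0 # local QR decoding from an image file
  http: ^1.0.0          # OpenAI API requests
```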
- `main.dart`: The entry point of the Flutter app.
- `home_screen.dart`: Contains UI logic for the main menu with two big buttons (“Open Camera” & “Choose from Gallery”).
- `camera_screen.dart`: (Optional) Implements a real-time camera preview using the `camera` plugin.
- `openai_service.dart`: Handles communication with the OpenAI API, sending prompts and parsing responses.
- `qr_utils.dart`: Provides methods to decode the first (or multiple) QR codes from an image.
Open the App
- Displays two main actions: “Open Camera” or “Choose from Gallery.”

Capture or Select an Image
- Camera: Requests camera permission, launches a live camera preview. The user taps “Capture” to take a photo.
- Gallery: Requests storage permission (if needed), opens a file picker or system gallery. The user chooses an image with a `.jpg`, `.jpeg`, or `.png` extension.
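The gallery path can be sketched with the `image_picker` plugin (a minimal example; error handling and permission prompts are omitted):

```dart
import 'package:image_picker/image_picker.dart';

/// Opens the system gallery and returns the chosen image's file path,
/// or null if the user cancels.
Future<String?> pickImageFromGallery() async {
  final picker = ImagePicker();
  final XFile? file = await picker.pickImage(source: ImageSource.gallery);
  return file?.path;
}
```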
Initial Analysis
- The app sends the raw image (as bytes or a base64-encoded string) to OpenAI with a custom prompt, asking for general feedback on the image.
- Note: Standard GPT endpoints aren’t built for direct vision analysis, so this is a conceptual approach. In production, a specialized service or an advanced model with vision support would be used.
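One way to sketch this conceptual step, assuming a vision-capable chat-completions endpoint (the model name and payload shape are assumptions, not something this project guarantees):

```dart
import 'dart:convert';
import 'dart:io';
import 'package:http/http.dart' as http;

/// Conceptual initial analysis: base64-encodes the image and asks the API
/// for general observations. Only works with a vision-capable model.
Future<String> analyzeImage(String imagePath, String apiKey) async {
  final bytes = await File(imagePath).readAsBytes();
  final b64 = base64Encode(bytes);
  final response = await http.post(
    Uri.parse('https://api.openai.com/v1/chat/completions'),
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer $apiKey',
    },
    body: jsonEncode({
      'model': 'gpt-4o', // assumed vision-capable model
      'messages': [
        {
          'role': 'user',
          'content': [
            {
              'type': 'text',
              'text': 'Is there an obvious security red flag in this image?'
            },
            {
              'type': 'image_url',
              'image_url': {'url': 'data:image/jpeg;base64,$b64'}
            },
          ],
        },
      ],
    }),
  );
  final json = jsonDecode(response.body) as Map<String, dynamic>;
  return json['choices'][0]['message']['content'] as String;
}
```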
QR Decoding
- Locally, the app decodes any QR code found in the image using `qr_code_tools` (or another plugin) to extract the text.
- If multiple QRs are expected, you’d iterate or use a library that supports detecting more than one code.
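With `qr_code_tools`, the local decode step reduces to a single call (sketch; exact behavior on a missing code varies by plugin version, so treat the null handling as an assumption):

```dart
import 'package:qr_code_tools/qr_code_tools.dart';

/// Decodes the first QR code found in the image file at [path].
/// Returns the decoded text, or null/throws if no code is found.
Future<String?> decodeQrFromImage(String path) async {
  return QrCodeToolsPlugin.decodeFrom(path);
}
```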
QR Text Analysis
- Each decoded text is sent to the OpenAI API with a prompt like: "This QR code text is: `<decodedText>`. What does it do? Is it a URL, Wi-Fi config, or something else? Any security risk or malicious intent?"
- The AI responds with a concise explanation.
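Assembling that prompt is straightforward; a small helper mirroring the wording above (the function name is illustrative):

```dart
/// Builds the analysis prompt for a decoded QR payload, mirroring the
/// wording used on this page.
String buildQrPrompt(String decodedText) {
  return 'This QR code text is: $decodedText. '
      'What does it do? Is it a URL, Wi-Fi config, or something else? '
      'Any security risk or malicious intent?';
}
```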
Display Results
- The final screen summarizes:
  - Initial Image Observations: potential presence of QR codes, guessed security level, etc.
  - QR Code Findings: each QR text, its interpretation, and risk assessment.
Permissions
- Android: Add `<uses-permission android:name="android.permission.CAMERA" />` and possibly `<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />`.
- iOS: Include `NSCameraUsageDescription` and `NSPhotoLibraryUsageDescription` in `Info.plist`.
OpenAI API Key
- Hardcoded in `openai_service.dart` for demonstration.
- Consider a secure approach (environment variables, a backend proxy) in production.
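A common Flutter pattern for keeping the key out of source code is a compile-time define (sketch; the variable name `OPENAI_API_KEY` is an assumption, and a backend proxy is still safer since compile-time constants can be extracted from the binary):

```dart
// Read at compile time; supply the value with:
//   flutter run --dart-define=OPENAI_API_KEY=sk-...
const String openAiApiKey = String.fromEnvironment('OPENAI_API_KEY');
```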
Library Compatibility
- `camera` plugin: Great for Android/iOS, limited on web.
- `qr_code_tools`: Usually native-only. For web support, you might adapt it or switch to a different library.
Multi-QR Detection
- `qr_code_tools` decodes the first discovered code. For multiple codes, you might rely on `google_ml_kit` or a specialized solution.
AI Limitations
- Standard GPT-3.5/4 endpoints can’t truly see images, so the “initial image analysis” is hypothetical unless you have a vision-enabled model.
- The text-based analysis of the decoded QR is accurate for explaining the data (URL, Wi-Fi), but you still need classic security checks (domain reputation, SSL checks, etc.) for robust real-world protection.
- Fork the repo, create a feature branch, and open a Pull Request describing changes.
- For major additions or refactoring, please open an issue first to align on design.
- Bug reports & feedback are always welcome—please use GitHub Issues.
- Domain Reputation: If the QR is a URL, perform a domain check to see if it’s known malicious.
- Multiple QRs: Enhance the logic to detect and display multiple QRs.
- Offline Heuristics: If offline, attempt local risk heuristics (like scanning for suspicious TLDs or known bad links).
- UI Polish: Add animations, better error handling, and richer results pages.
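The offline-heuristics idea could start as small as a TLD check on the decoded text (a rough sketch; the TLD list is illustrative, not authoritative, and a real implementation would combine several signals):

```dart
/// A very rough offline heuristic: flags URLs whose top-level domain is
/// disproportionately used in phishing campaigns. Illustrative list only.
const Set<String> _suspiciousTlds = {'zip', 'tk', 'top', 'gq', 'ml'};

bool looksSuspicious(String decodedText) {
  final uri = Uri.tryParse(decodedText);
  if (uri == null || uri.host.isEmpty) return false;
  final tld = uri.host.split('.').last.toLowerCase();
  return _suspiciousTlds.contains(tld);
}
```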
Can it run on web?
- Basic functionality might require alternative libraries for camera/gallery. The current approach is mostly for mobile platforms.

Is the AI-based image analysis real?
- GPT-3.5/4 cannot interpret raw images via the standard endpoints. This project demonstrates a conceptual design. If you have access to GPT-4 with vision or a specialized image-analysis model, you could integrate it more effectively.

What if I don’t want to send the image to OpenAI?
- You can disable the “initial analysis” step and only decode the QR locally, then send just the decoded text to OpenAI for interpretation.
This project is released under the MIT License. Feel free to modify and distribute, but please include the original license file and attribution.