This tutorial demonstrates how to use the camera plugin in combination with Firebase’s vision library to read any type of barcode. The example below is demonstrated using the Android emulator with the virtual scene option selected as the camera emulator.
Using the Android Emulator Virtual Scene
If you do not wish to use the virtual scene, skip this section. Otherwise, start by creating an Android emulator and, in the advanced settings, selecting the virtual scene option as the camera.
Download any barcode image you can find via a Google image search, run the emulator, and click the ellipsis (More) button in the emulator’s toolbar to open the extended controls.
Finally, under the Camera option on the left nav, set the Wall image to point to this barcode file.
I’m not going to detail the steps to create a Flutter project. Instead, I will assume you already have your project ready and running. However, you will need to set up a Firebase Project and add it to your Flutter application project.
You may wonder why Firebase is used. Firebase has a service called ML Kit to which we can pass an image and retrieve the values of any barcodes it reads. We can also rest assured that ML Kit has been trained to read a wide range of barcode formats!
Setup Camera Preview
Luckily, there is a flutter plugin conveniently called Camera that allows us to have a camera preview along with the ability to acquire an image and pass it to Firebase ML Vision for barcode results.
Simply add the Camera plugin (with the current version) to your pubspec.yaml
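In your `pubspec.yaml` it would look something like the following (the version number here is only an example; check pub.dev for the latest release):

```yaml
dependencies:
  flutter:
    sdk: flutter
  # Camera plugin for the live preview and for taking pictures.
  camera: ^0.5.8  # example version; use the latest from pub.dev
```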
We’ll take the camera code example straight from there as a basis to work with.
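A sketch along the lines of the plugin’s README example (the `CameraApp` naming follows that example; error handling is omitted for brevity) looks like this:

```dart
import 'package:camera/camera.dart';
import 'package:flutter/material.dart';

List<CameraDescription> cameras;

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  // Enumerate the device's cameras before the app starts.
  cameras = await availableCameras();
  runApp(MaterialApp(home: CameraApp()));
}

class CameraApp extends StatefulWidget {
  @override
  _CameraAppState createState() => _CameraAppState();
}

class _CameraAppState extends State<CameraApp> {
  CameraController controller;

  @override
  void initState() {
    super.initState();
    // Use the first (usually back) camera.
    controller = CameraController(cameras[0], ResolutionPreset.medium);
    controller.initialize().then((_) {
      if (!mounted) return;
      setState(() {}); // rebuild once the controller is ready
    });
  }

  @override
  void dispose() {
    controller?.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    if (controller == null || !controller.value.isInitialized) {
      return Container(); // nothing to show until the camera is ready
    }
    return AspectRatio(
      aspectRatio: controller.value.aspectRatio,
      child: CameraPreview(controller),
    );
  }
}
```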
** For those using the Android virtual scene for the camera preview, you can hold Alt and use the W, A, S and D keys to move around (the wall is in the room behind you) **
Read a Barcode
Now that we have a camera preview to work with, we can start taking an image and passing it to Firebase’s vision detection API. As the camera plugin is still in preview, there is currently no way to stream the camera’s preview into ML Kit. Although there is now functionality to acquire the byte buffer of the preview, the pixel data is not in the format that the VisionImage class expects, and converting it is out of scope for this tutorial.
Instead, we will create a timer that runs every 3 seconds, takes an image, saves it, and has ML Kit load and read it.
First, let us set up the timer code.
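A minimal sketch of that timer, using `Timer.periodic` from `dart:async` inside the widget’s State class. The `_captureAndDetect()` helper is a hypothetical name for the capture-and-read step, and the busy flag is one way to implement the overrun safeguard:

```dart
import 'dart:async';

// Inside the State class:
Timer _timer;
bool _isDetecting = false;

void _startBarcodeTimer() {
  // Tick every 3 seconds; skip a tick if the previous detection
  // has not finished yet, so overlapping runs cannot pile up.
  _timer = Timer.periodic(const Duration(seconds: 3), (Timer t) async {
    if (_isDetecting) return;
    _isDetecting = true;
    try {
      await _captureAndDetect(); // hypothetical helper: capture + ML Kit read
    } finally {
      _isDetecting = false;
    }
  });
}
```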
Now that we have the callback function ticking every 3 seconds (safeguarded, in case the barcode detection overruns, by stopping it during the callback tick), let’s take an image!
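Taking the picture might look like the following sketch. The file name and the use of the temporary directory are my own choices, and the single-path `takePicture(filePath)` call assumes an older (pre-0.6) version of the camera plugin:

```dart
import 'dart:io';
import 'package:path_provider/path_provider.dart';

Future<String> _takePicture() async {
  // Save each capture to the same path in the temp directory,
  // so every tick overwrites the previous image.
  final Directory dir = await getTemporaryDirectory();
  final String filePath = '${dir.path}/barcode_capture.jpg';

  final File previous = File(filePath);
  if (await previous.exists()) {
    await previous.delete(); // takePicture fails if the file already exists
  }

  await controller.takePicture(filePath);
  return filePath;
}
```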
Every 3 seconds the image will be overwritten and passed to the ML Kit API as described below:
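A sketch of the detection step, using the `firebase_ml_vision` plugin’s `FirebaseVisionImage` and `BarcodeDetector` classes (the function name `_readBarcodes` is my own):

```dart
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

// Inside the State class:
String _barcodeRead = '';

Future<void> _readBarcodes(String filePath) async {
  // Load the saved capture into an ML Kit vision image.
  final FirebaseVisionImage visionImage =
      FirebaseVisionImage.fromFilePath(filePath);
  final BarcodeDetector detector =
      FirebaseVision.instance.barcodeDetector();

  final List<Barcode> barcodes = await detector.detectInImage(visionImage);

  // Join the raw values of every barcode found in the image.
  _barcodeRead = barcodes.map((Barcode b) => b.rawValue).join('\n');

  detector.close();
}
```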
For the above code to compile, you will need to add the Firebase ML Vision plugin to the pubspec.yaml (along with path_provider, to get folder locations on the system).
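The additional dependencies would look something like this (again, the version numbers are examples; check pub.dev for the latest releases):

```yaml
dependencies:
  # ML Kit vision APIs, including the barcode detector.
  firebase_ml_vision: ^0.9.0   # example version
  # Provides platform folder locations (e.g. the temp directory).
  path_provider: ^1.4.0        # example version
```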
And add the necessary includes at the top of the main.dart file
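Assuming the dependencies above, the imports at the top of `main.dart` would look like:

```dart
import 'dart:async';                                   // Timer
import 'dart:io';                                      // File, Directory
import 'package:camera/camera.dart';                   // camera preview + capture
import 'package:firebase_ml_vision/firebase_ml_vision.dart'; // ML Kit barcode detection
import 'package:flutter/material.dart';
import 'package:path_provider/path_provider.dart';     // temp directory location
```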
So… now the app can take a photo, read all the barcodes detected in the image, and store them in the “_barcodeRead” member variable. All that is left is to display it!
Display the Barcodes
Add the Text element to the Stack inside of the build method – we can wrap it in a Container so that it can be anchored to the bottom of the screen.
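A sketch of that build method; the padding and text styling are my own choices:

```dart
@override
Widget build(BuildContext context) {
  if (controller == null || !controller.value.isInitialized) {
    return Container();
  }
  return Stack(
    children: <Widget>[
      CameraPreview(controller),
      // Container anchors the barcode text to the bottom of the screen.
      Container(
        alignment: Alignment.bottomCenter,
        padding: const EdgeInsets.all(16.0),
        child: Text(
          _barcodeRead,
          style: const TextStyle(color: Colors.white, fontSize: 18.0),
        ),
      ),
    ],
  );
}
```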
Finally, we need to ‘redraw’ the widget whenever we update the barcode variable. To do this in Flutter, all we need to do is call “setState”.
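Wrapping the update in “setState” would look like this (the `result` variable stands in for whatever string the detection step produced):

```dart
// Assigning inside setState tells Flutter the widget needs rebuilding,
// so the new barcode value appears on screen.
setState(() {
  _barcodeRead = result;
});
```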