Introduction

Since its first release, Processing has been known for its capacity for creating visualizations. Its strength in manipulating the pixels of images enables even more experimentation when external image sources, like cameras, are used.

While interesting and meaningful, using the built-in camera of a laptop or desktop computer with Processing can be limited by the form factor and input methods of the computer. The portability and expandability of Raspberry Pi single-board computers opens up new frontiers for using a camera as input for Processing sketches.

The combination of Processing, a camera, and a couple of components connected to the Pi's GPIO pins can be used to create some unique experiences while remaining affordable. Think of possibilities like:

  • Portable cameras with filters that are controlled by physical buttons and knobs
  • Portrait booths that generate artwork based on a recent snapshot
  • Computer vision experiments
  • Timelapse rigs
  • and more
Some examples of what's possible using a camera with Processing on the Pi

Of course, this is only a short glimpse of what's possible. The knowledge you gain in this tutorial should enable you to create your own projects using camera input in Processing on the Raspberry Pi.

Let's take a look at what you will need in order to make the projects in this tutorial.

Required Materials

The main component that you will need for this tutorial is a camera attached to the Raspberry Pi. Below is the full list of parts necessary for this tutorial:

  • a Raspberry Pi model 3+, 3, or 2 (these are recommended; it will also work on the Pi Zero and older versions, albeit much more slowly) with Processing installed
  • TV or any screen / monitor with HDMI input
  • Raspberry Pi camera module v1 or v2 (or a USB webcam compatible with the Raspberry Pi)

Optional:

  • 1 push button
  • Wires
A note about cameras

The official Raspberry Pi camera module is recommended because some inexpensive alternatives have been known to not work well with the V4L2 driver used by Processing. Also, if a USB webcam is used instead, there might be slight performance issues.

Overview of using a camera with Processing on the Pi

Getting video frames from the camera into Processing has to be facilitated by an external library. Processing's Video Library works well on Windows, macOS, and some Linux distributions. However, on the Pi its performance has been found to be lacking, which is why an alternative library exists to provide the best possible experience on this platform.

This alternative library is named GL Video. Its name stems from it handling frames as OpenGL textures rather than arrays of pixel data, the former of which is more efficient because it involves fewer operations on the CPU.

The GL Video library

The GL Video library works on Raspberry Pi computers running Raspbian OS. You will find it already pre-installed if you are using the Pi image with Processing; alternatively, you can install it through the Library Manager within the Processing IDE. It enables you to:

  • Capture frames from a camera via the GLCapture class
  • Read frames from video files via the GLMovie class

Both work roughly analogously to how the regular Video library does.
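
As a quick aside, here is a minimal sketch of reading frames from a video file with GLMovie; the file name "launch2.mp4" is just a placeholder for a video placed in the sketch's data folder, and the pattern mirrors how GLCapture is used below. The rest of this tutorial focuses on the camera and GLCapture.

    import gohai.glvideo.*;
    GLMovie movie;

    void setup() {
      size(560, 406, P2D);
      // load a video file from the sketch's "data" folder (placeholder name)
      movie = new GLMovie(this, "launch2.mp4");
      movie.loop(); // play the movie in a loop
    }

    void draw() {
      background(0);
      if (movie.available()) {
        movie.read();
      }
      image(movie, 0, 0, width, height);
    }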

Before you use this library in your sketches, the camera has to be connected to your Pi. With the camera connected and set up, we can start using the GL Video library to work with the video stream from the camera. Specifically, the GLCapture class within GL Video is the class that we'll be using to get the video stream from the camera.

Not using Processing's Raspberry Pi image?

If you are not using the pre-configured Raspbian image containing Processing, please see this section for the configuration changes necessary to be able to use the camera module.

Using the GLCapture class

The main purpose of the GLCapture class is to set the framerate and resolution of the camera, and to read image frames from the camera in the form of textures. The GLCapture class only works with the P2D and P3D renderers and provides methods that are very similar to the Capture class from the original Video Library.

If you've never worked with the Video Library, you are encouraged to take a look at an excellent tutorial by Daniel Shiffman that goes over the steps necessary to read a video stream from the camera in Processing: https://processing.org/tutorials/video/

The main methods that GLCapture provides are:

  • list() - lists all connected cameras
  • start() - starts the video stream from the camera
  • stop() - stops the video stream from the camera
  • available() - checks if a new frame is available for reading
  • read() - populates the object with the data from a video frame
Difference between GLCapture and the original Capture class

Though the syntax and the purpose of the two classes are very similar, there are some subtle differences between them. For example, the captureEvent callback function that exists in the Capture class is not available in the GLCapture class. With GL Video, one instead calls the available() method inside draw() to see if there is a new frame waiting. Also, GL Video only works with the P2D and P3D renderers.
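
To make the difference concrete, here is a minimal fragment (assuming a GLCapture object named video that was created and started in setup(), as in the full example later in this section); the commented-out lines show the callback style the regular Video library would use instead:

    // Regular Video library style (not available with GL Video):
    // void captureEvent(Capture c) {
    //   c.read();
    // }

    // GL Video style: poll for new frames inside draw()
    void draw() {
      if (video.available()) { // has the camera delivered a new frame?
        video.read();          // if so, copy it into the video object
      }
      image(video, 0, 0, width, height);
    }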

Let's dig into using the GLCapture class to start capturing the video stream! The process of using the GLCapture class looks like this:

  • Make sure the sketch renderer is set to P2D or P3D
  • Import the GL Video library that contains the GLCapture class (import gohai.glvideo.*)
  • Create a new GLCapture object that will stream and store the textures from the camera
  • Initialize the GLCapture object, specifying the camera framerate and the width and height of the desired video stream
  • Start the stream via the start() method
  • Read the video stream when it is available
  • Display (or otherwise use) the video

Enough with the theory. Let's try this class out in practice! The following example sketch comes with the GL Video library and will serve as a building block for our next steps. Running this example will result in a window that shows whatever the camera is capturing:

    import gohai.glvideo.*;
    GLCapture video;

    void setup() {
      size(320, 240, P2D); // Important to note the renderer

      // Get the list of cameras connected to the Pi
      String[] devices = GLCapture.list();
      println("Devices:");
      printArray(devices);

      // Get the resolutions and framerates supported by the first camera
      if (0 < devices.length) {
        String[] configs = GLCapture.configs(devices[0]);
        println("Configs:");
        printArray(configs);
      }

      // this will use the first recognized camera by default
      video = new GLCapture(this);

      // you could be more specific also, e.g.
      //video = new GLCapture(this, devices[0]);
      //video = new GLCapture(this, devices[0], 640, 480, 25);
      //video = new GLCapture(this, devices[0], configs[0]);

      video.start();
    }

    void draw() {
      background(0);
      // If the camera is sending new data, capture that data
      if (video.available()) {
        video.read();
      }
      // Copy pixels into a PImage object and show on the screen
      image(video, 0, 0, width, height);
    }

There are a few important parts of this code which will save you a lot of headache later:

  • Listing connected cameras
  • Checking camera capabilities
  • Using framerates and resolutions supported by the cameras you're using

Listing the cameras connected to the Pi

Sometimes you might want to have more than a single camera connected to the Pi. You can list all cameras and use a specific camera connected to the Pi by using the GLCapture.list() method:

    String[] devices = GLCapture.list();

    println("Devices:");
    printArray(devices);
    ...
    firstVideo = new GLCapture(this, devices[0]);
    secondVideo = new GLCapture(this, devices[1]);

To get an idea of the framerates and resolutions supported by the camera(s), you can use the GLCapture.configs() method.

Finding out camera capabilities

For each camera connected to the Pi, it is useful to know which resolutions and framerates it provides. Using the GLCapture.configs() method should return all available resolutions and framerates that the camera supports:

    ...
    // For each camera, get the configs before using the camera:
    String[] configs = GLCapture.configs(devices[0]);

    println("Configs:");
    printArray(configs);
    ...

Explicitly setting the desired framerate and resolution

After you find out the camera's capabilities, you can be specific about the resolution and framerate that you'd like to use with your camera. For example, if you wanted to tell the camera to use a resolution of 640 by 480 pixels at 25 frames per second, you'd instantiate the GLCapture class like this:

    ...
    video = new GLCapture(this, devices[0], 640, 480, 25);
    ...

Now that you know the basics of using the GL Video library and, specifically, the GLCapture class, let's make some fun projects!

Mini projects using the camera

Using our knowledge of the GLCapture class, we will build the following three projects using the camera:

  • Using built-in image filters
  • Live histogram viewer
  • Using shaders for realtime visual effects

Let's start with a simple project that will give you an idea of how to leverage the GLCapture class and use it with built-in image operations in Processing.

Using built-in image filters with camera (threshold, blur, etc)

Processing comes with a range of built-in image filters, such as:

  • Threshold
  • Blur
  • Invert
  • etc.

These filters can be applied to any PImage, including the GLCapture object that returns video data from the camera.

Consider the following example that turns a color image into a grayscale image:

    PImage img;
    img = loadImage("apples.jpg");
    image(img, 0, 0);
    filter(GRAY);

Let's take this simple example and apply it to a live video feed. We only need to replace the static image loaded from the hard drive with the image that comes from the camera stream. For example:

    // Get video data from the stream
    if (video.available()) {
      video.read();
    }
    // Display the video from the camera
    image(video, 0, 0, width, height);
    // Apply a grayscale filter
    filter(GRAY);

Nice and easy! Of course, we're not limited to just the grayscale filter. Let's use another filter, a Threshold filter, which produces the following result:

Here's the full sketch for applying the threshold effect:

    import gohai.glvideo.*;
    GLCapture video;

    void setup() {
      size(640, 480, P2D);

      // this will use the first recognized camera by default
      video = new GLCapture(this);
      video.start();
    }

    void draw() {
      background(0);
      if (video.available()) {
        video.read();
      }
      image(video, 0, 0, width, height);
      // Apply a threshold filter with parameter level 0.5
      filter(THRESHOLD, 0.5);
    }

Don't stop there. Play with the other filters and see which one you like the most; swapping in a different filter is a one-line change, as the snippet below shows. Now that you're getting comfortable with built-in filters, let's continue with a project that takes advantage of the GLCapture class and applies Processing's pixel analysis operations.
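
For instance, here is one possible variation of the draw() function from the threshold sketch that applies a different built-in filter; only the filter() call changes, and the commented-out lines are further alternatives you could try:

    void draw() {
      background(0);
      if (video.available()) {
        video.read();
      }
      image(video, 0, 0, width, height);
      // Try other built-in filters instead of THRESHOLD:
      filter(INVERT);
      // filter(BLUR, 3);      // blur with a radius of 3 pixels
      // filter(POSTERIZE, 4); // reduce each color channel to 4 levels
    }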

Live Histogram Viewer

One of the built-in example sketches in Processing ("Topics > Image Processing > Histogram") features a histogram generated from the pixel data of a still image.

A histogram is the frequency distribution of the gray levels, with the number of pure black values displayed on the left and the number of pure white values on the right.

What if we take that example, but instead of a still image, use a live video stream from the camera to generate the histogram? Here's an example video captured while running the live histogram viewer:

The only addition compared to the default still-image histogram sketch is to use the GLCapture class and read the camera data into a PImage object, which is then analyzed to create the histogram:

    PImage img;

    void setup() {
      // set up the camera framerate and resolution
      ...
    }

    void draw() {
      if (video.available()) {
        video.read();
      }
      img = video;
      image(video, 0, 0);

      // Create histogram from the image on the screen (camera feed)
      ...
    }

This time, let's request a specific resolution and framerate for the camera input to control the performance of our sketch. Lower resolutions can be processed much faster than higher resolutions. Controlling the framerate can also impact the performance of your sketch. For the histogram viewer, let's use a resolution of 640 by 480 pixels and a framerate of 24 frames per second by using the GLCapture instantiation parameters:

    ...
    void setup() {
      ...
      video = new GLCapture(this, devices[0], 640, 480, 24);
      video.start();
    }
    ...

Below is the full sketch for the live histogram viewer:

    /**
     * Histogram Viewer derived from the "Histogram" built-in example sketch.
     *
     * Calculates the histogram based on the image from the camera feed.
     */

    import gohai.glvideo.*;
    GLCapture video;

    void setup() {
      size(640, 480, P2D);

      String[] devices = GLCapture.list();
      println("Devices:");
      printArray(devices);

      // Use camera resolution of 640x480 pixels at 24 frames per second
      video = new GLCapture(this, devices[0], 640, 480, 24);
      video.start();
    }

    void draw() {
      background(0);
      if (video.available()) {
        video.read();
      }

      image(video, 0, 0);
      int[] hist = new int[256];

      // Calculate the histogram
      for (int i = 0; i < video.width; i++) {
        for (int j = 0; j < video.height; j++) {
          int bright = int(brightness(get(i, j)));
          hist[bright]++;
        }
      }

      // Find the largest value in the histogram
      int histMax = max(hist);

      stroke(255);
      // Draw half of the histogram (skip every second value)
      for (int i = 0; i < video.width; i += 2) {
        // Map i (from 0..video.width) to a location in the histogram (0..255)
        int which = int(map(i, 0, video.width, 0, 255));
        // Convert the histogram value to a location between
        // the bottom and the top of the picture
        int y = int(map(hist[which], 0, histMax, video.height, 0));
        line(i, video.height, i, y);
      }
    }

Notice how we used video.width and video.height to find out the dimensions of the video. The GLCapture class inherits these and other methods from the PImage class (see the reference for other methods available to PImage and thus to each instance of GLCapture).
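
As an illustration of that inheritance (a small sketch of the idea, not part of the histogram example), you can sample individual pixels of the camera frame with PImage methods like get():

    void draw() {
      background(0);
      if (video.available()) {
        video.read();
      }
      image(video, 0, 0);

      // Treat the camera frame like any PImage: sample the center pixel...
      color center = video.get(video.width / 2, video.height / 2);

      // ...and use its brightness to size a circle drawn over the feed
      float diameter = map(brightness(center), 0, 255, 5, 50);
      fill(255, 0, 0);
      ellipse(video.width / 2, video.height / 2, diameter, diameter);
    }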

By being able to analyze and operate on pixel data from the camera, you can come up with real-time or near real-time visuals that are interesting and fun to experiment with.

What if you wanted to accelerate various image effects and perhaps push the boundaries of performance on the Pi? Enter shaders!

Using GLSL Shaders for improved performance

Doing image processing pixel by pixel is a computationally expensive process. The CPU on the Pi is relatively slow and the amount of RAM is low, so performance suffers when complex operations or analysis are performed on the image data, as the sketch below illustrates.
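
As a rough illustration of that cost (a hypothetical helper, not code from this tutorial), inverting a single 640x480 frame on the CPU means visiting more than 300,000 pixels one by one, every frame:

    // CPU-based per-pixel processing: every pixel is handled
    // sequentially by the Pi's relatively slow CPU.
    void invertFrame(PImage img) {
      img.loadPixels();
      for (int i = 0; i < img.pixels.length; i++) {
        color c = img.pixels[i];
        img.pixels[i] = color(255 - red(c), 255 - green(c), 255 - blue(c));
      }
      img.updatePixels();
    }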

There is a way to improve the performance of image operations: using the Graphics Processing Unit (GPU), which is designed to accelerate graphics processing. The Pi's GPU (even on the Pi Zero) is capable of processing millions of pixels simultaneously, and that can result in a tangible performance increase. For example, check out this video of hardware-accelerated effects in Processing:

Because the data we get from the GL Video library is essentially regular pixel data, we can do whatever we want with those pixels after putting them onto a PImage. For example, we can use shaders to take advantage of hardware acceleration and offload image processing from the relatively slow CPU onto the graphics processing unit (GPU) of the Raspberry Pi.

A shader is a program that runs on the GPU and generates the visual output on the screen. Processing supports shaders written in GLSL (OpenGL Shading Language).

You might have seen shaders in use on websites or in video games. They are widely supported on any platform that has a GPU, including the Raspberry Pi.

Shaders in Processing

There are two types of shaders that can be used in Processing:

  • Vertex shaders, which specify the boundaries of the graphics on the screen
  • Fragment shaders, which specify what is displayed inside those boundaries
Learning about shaders in Processing

The theory behind shaders is largely outside the scope of this tutorial, but there is a detailed article about both types of shaders and how they can be used in Processing: https://processing.org/tutorials/pshader/

In this tutorial we will only explore fragment shaders, which fill the screen with colors according to the shader code. For the purposes of this tutorial, we will take existing open-source fragment shaders from various places online and use them with the video from the camera.

Let's start by understanding how to create a shader file and use it within a Processing sketch.

Creating and using a shader file

There are four steps to create and apply a shader file in your Processing sketch:

  • Declaring a shader in the sketch using the PShader type
  • Creating a shader file in the "data" folder of the sketch
  • Loading the shader file via the loadShader() method inside the sketch
  • Activating the shader via the shader() method

Let's go over these steps one by one to create and use a simple shader that will be applied to an image generated in Processing.

Declaring the shader in your sketch is done by using the built-in PShader type. After the shader is declared, we need to create a separate file containing the shader code (a special file with the glsl extension that resides in the data folder of the current sketch), load that file with loadShader(fileName), and apply the shader to whatever is being drawn within Processing. Here's an example of the structure of a sketch that uses a shader:

    PShader shader;

    void setup() {
      size(600, 100, P2D);
      // load the file containing shader code; it has to be inside the "data" folder
      shader = loadShader("shader.glsl");
    }

    void draw() {
      // the drawing code will go here
      shader(shader); // apply the shader to whatever is drawn
    }

Please create a sketch with this example code and save it so that you know the location of the sketch.

Since you now have a reference to the shader.glsl file within the sketch, you will need to create that file (it can be empty for now) and place it within the data folder of the current sketch.

Creating and Editing shader files

Currently, in order to create or change a shader file, you need to use an external editor (you can use one of Raspbian's default text editors, found in Main Menu > Accessories > Text Editor). In the future, editing of GLSL shader files will be possible within the Processing IDE, which will improve this workflow.

Now that the shader file is created, let's put some code in it. We will use existing shader code found online that turns a color image into a grayscale image. Copy and paste the following code, save the file, and let's go over it to understand what's happening:

"shader.glsl" listing:

    // Shader that turns a color image into grayscale
    #define PROCESSING_TEXTURE_SHADER

    uniform sampler2D texture;
    varying vec4 vertTexCoord;

    void main() {
      vec4 normalColor = texture2D(texture, vertTexCoord.xy);
      float gray = 0.299*normalColor.r + 0.587*normalColor.g + 0.114*normalColor.b;
      gl_FragColor = vec4(gray, gray, gray, normalColor.a);
    }

Even though this shader is very small (only a few lines), it contains many important parts: definitions, variables, calculations, assignments, and functions.

When it comes to Processing, there are six types of shaders that can be explicitly defined by using the #define statement:

  • #define PROCESSING_POINT_SHADER
  • #define PROCESSING_LINE_SHADER
  • #define PROCESSING_COLOR_SHADER
  • #define PROCESSING_LIGHT_SHADER
  • #define PROCESSING_TEXTURE_SHADER
  • #define PROCESSING_TEXLIGHT_SHADER

We will use the #define PROCESSING_TEXTURE_SHADER type exclusively, because our shaders will be texture shaders (as opposed to light, color, and others).

When writing fragment shaders, some variables are essential for every shader:

  • uniform sampler2D texture
  • varying vec4 vertTexCoord
  • gl_FragColor within the main function

The void main() function is also necessary for every shader. Within this function, the calculations on pixel values happen and the result is returned in the gl_FragColor variable.

The uniform sampler2D texture and varying vec4 vertTexCoord have special meanings, so let's look at them closely:

uniform sampler2D texture is essentially an image (an array of pixels) that will be passed from the Processing sketch to the shader. This is what the shader receives and will operate on.

varying vec4 vertTexCoord is a set of coordinates for the boundaries of the resulting image. Even though these boundaries can be moved wherever you want, we will not touch them, which results in the image taking up the whole area of the sketch.

Now, let's talk about the calculations taking place in this shader. Since we are turning a color image into grayscale, we first need to know the RGB values of every pixel, and then we sum up those values in some manner to get a weighted average.

    // This gives the shader every pixel of the image (texture) to operate upon
    vec4 normalColor = texture2D(texture, vertTexCoord.xy);

    // Calculate grayscale values using luminance correction
    // (see http://www.tannerhelland.com/3643/grayscale-image-algorithm-vb6/ for more examples)
    float gray = 0.299*normalColor.r + 0.587*normalColor.g + 0.114*normalColor.b;

This looks very different from regular Processing operations where you have to loop over arrays of pixels, doesn't it? That's because when working with shaders, the main function is run on every pixel simultaneously (in parallel), so you cannot loop over pixel values in the conventional way.

Since the sketch currently doesn't contain any drawing functions, we won't have anything to render and modify. Let's add a few colorful rectangles to the screen and then apply the shader to see how it affects the image. Let's add this code within the draw() function, before the filter() function is called:

    void draw() {
      background(255);
      fill(255, 0, 0);
      rect(0, 0, 200, height); // add a red rectangle
      fill(0, 255, 0);
      rect(200, 0, 200, height); // add a green rectangle
      fill(0, 0, 255);
      rect(400, 0, 200, height); // add a blue rectangle
      filter(shader);
    }

Hither'south the result of this updated sketch running with and without the shader beingness practical:

Grayscale effect using a shader

You can try modifying the values within the calculation part of the shader to see how each color is being converted to grayscale:

    // Play with these numbers and observe how the grayscale changes
    float gray = 0.299*normalColor.r + 0.587*normalColor.g + 0.114*normalColor.b;

You might think that converting a color image to grayscale is no big deal, since you can do the same with Processing's built-in GRAY filter. The most compelling reason to use shaders is that they can be an order of magnitude faster than CPU-intensive filter operations; the sketch below gives a rough way to see this for yourself. The difference is especially noticeable when it comes to animation or video.
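
If you want to check the difference yourself, one rough experiment (an optional modification of the rectangle sketch above, not part of the original example) is to print the frame rate while toggling between the built-in GRAY filter and the grayscale shader with a key press:

    boolean useShader = true;

    void draw() {
      background(255);
      fill(255, 0, 0);
      rect(0, 0, 200, height);
      fill(0, 255, 0);
      rect(200, 0, 200, height);
      fill(0, 0, 255);
      rect(400, 0, 200, height);

      if (useShader) {
        filter(shader); // GPU: the grayscale shader from this section
      } else {
        filter(GRAY);   // built-in grayscale filter (CPU-intensive, per the discussion above)
      }

      println(frameRate); // compare the frame rates in the console
    }

    void keyPressed() {
      useShader = !useShader; // press any key to switch modes
    }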

Let's take the same shader and apply it to a live camera feed using the GL Video library!

Using a shader with camera feed

Since the shader can be applied to any image coming from a Processing sketch, we can put together a sketch that does the following:

  • Captures the video stream from the camera
  • Draws the video frames from the camera onto the screen
  • Applies our grayscale shader and shows the modified video feed on the screen

The most important part of this process is to read the camera data, draw it onto a PImage object and apply the shader:

    ...
    // set up the sketch and the camera
    ...

    // Read camera data and apply the shader
    void draw() {
      background(0);
      if (video.available()) {
        video.read();
      }

      image(video, 0, 0);
      shader(grayscaleFilter);
    }

Please see the video of the filter applied to the camera stream in real time:

The complete sketch for this effect is below:

grayscale.pde

    import gohai.glvideo.*;
    GLCapture video;

    // Define the shader
    PShader grayscaleFilter;

    void setup() {
      size(640, 480, P2D);

      String[] devices = GLCapture.list();
      println("Devices:");
      printArray(devices);

      // Use camera resolution of 640x480 pixels at 24 frames per second
      video = new GLCapture(this, devices[0], 640, 480, 24);
      video.start();

      // Load the shader
      grayscaleFilter = loadShader("shader.glsl");
    }

    void draw() {
      background(0);
      if (video.available()) {
        video.read();
      }

      image(video, 0, 0);

      // Apply the shader
      shader(grayscaleFilter);
    }

Contents of the shader file: shader.glsl

    // Shader that turns a color image into grayscale
    #define PROCESSING_TEXTURE_SHADER

    uniform sampler2D texture;
    varying vec4 vertTexCoord;

    void main() {
      vec4 normalColor = texture2D(texture, vertTexCoord.xy);
      float gray = 0.299*normalColor.r + 0.587*normalColor.g + 0.114*normalColor.b;
      gl_FragColor = vec4(gray, gray, gray, normalColor.a);
    }

This is a start! GL Video and shaders become a powerful combination for creating compelling real-time visualizations. Now we can explore more advanced topics, like passing parameters from the sketch to the shader.

Passing parameters to the shader

What if you wanted to change values within the shader in real time by passing those values from the sketch somehow? The PShader class has a method for exactly that: the set() method.

Using this method, you can have the sketch update variables inside the shader in real time. For example, let's say our shader has the following variable that acts as an array of two values:

    ...
    uniform vec2 pixels;
    ...

Now, using the set() method, you can update the pixels variable within the shader by specifying which values you'd like to pass into it:

    // The first parameter specifies the name of the variable, followed by the new values
    effect.set("pixels", 0.1 * mouseX, 0.1 * mouseY);
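
In a sketch, set() is typically called every frame in draw(), before the shader is applied. Here is a minimal fragment of that pattern (assuming a PShader named effect loaded from pixelate.glsl and the camera setup from the previous section):

    void draw() {
      background(0);
      if (video.available()) {
        video.read();
      }
      image(video, 0, 0);

      // Update the "pixels" uniform each frame based on the mouse position
      effect.set("pixels", 0.1 * mouseX, 0.1 * mouseY);

      // Apply the shader to the camera frame
      filter(effect);
    }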

Let's have a look at how this can be used in practice:

Here's the shader code:

pixelate.glsl

    #ifdef GL_ES
    precision mediump float;
    precision mediump int;