MAX_FRAMES_PER_SEC and other rendering related findings

What started out as a quick in-and-out mission to see whether there was any way of having transparency in bitmap files turned out to answer more than that one question. (The answer is no: dLcdDrawBitmap always sets pixels to on or off.) It also got me looking into how the display buffer is swapped onto the screen, and that led me to this define:

#define   MAX_FRAMES_PER_SEC    10                    //!< Max frames per second update in display

For a while I’ve been wondering if it is possible to wait for vertical blank somehow, to avoid tearing, and that is why I found that define interesting. So it seems like there is a pretty low framerate cap in the Mindstorms VM. When following the define a bit more I eventually found my way into a function called dLcdExec that looks like this:

void      dLcdExec(LCD *pDisp)
{
  if (memcmp((const void*)pDisp,(const void*)&VMInstance.LcdBuffer,sizeof(LCD)) != 0)
  {
    VMInstance.LcdUpdated  =  1;
  }
}

I am not familiar with XPutImage or XFlush, but their presence seems to indicate that the Mindstorms VM uses X11 for rendering to the screen. Since I’m not very good with Linux I don’t know whether that means any app could be written as long as it uses X11, but maybe…

Also, another thing I started thinking about, and that I find weird, is that the screen is 178 pixels wide. The LCD buffer is defined as:

#define   LCD_BUFFER_SIZE (((LCD_WIDTH + 7) / 8) * LCD_HEIGHT)

And the width and height are defined (but hidden behind another define) as:

#define   vmLCD_WIDTH                   178                           //!< LCD horizontal pixels
#define   vmLCD_HEIGHT                  128                           //!< LCD vertical pixels

So that means the buffer contains ((178 + 7) / 8) * 8 = 184 pixels for each line, which is 6 pixels more than 178. I wonder what those 6 extra pixels / bits mean?
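The rounding can be checked with the macro itself. A quick sketch, using the same constants as lms2012.h:

```c
#define LCD_WIDTH  178
#define LCD_HEIGHT 128

/* Same shape as the firmware macro: round each line up to whole bytes */
#define BYTES_PER_LINE   ((LCD_WIDTH + 7) / 8)           /* 23 bytes/line */
#define LCD_BUFFER_SIZE  (BYTES_PER_LINE * LCD_HEIGHT)   /* 2944 bytes    */
```

23 bytes per line is 184 pixel positions, so the 6 leftover bits per line are just integer-division padding.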

Oh, and one final note. dLcdDrawBitmap uses dLcdDrawPixel, the same function that UI_DRAW PIXEL uses, so I could make my own sprite renderer in LMS code that sorts out transparency.

Ok, so there is one more thing! I also found the UI_DRAW subcommands STORE and RESTORE, which take a snapshot of the buffer and restore that snapshot to the buffer. If one were to build, say, a game with a static background, the background could be rendered once onto the screen buffer with slower operations like UI_DRAW BMPFILE, and then a snapshot could be taken. Each iteration of the game loop would then restore the snapshot containing the background and render each transparent sprite on top of it. The STORE and RESTORE subcommands are simply memcpy’s, so they should be tons faster than whatever I could cook up in LMS code.
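To make that pattern concrete, here is a minimal C sketch of it. The buffer names, the bit order within a byte, and the sprite format are my own assumptions for illustration, not the firmware’s actual code; in real LMS bytecode this would be UI_DRAW STORE/RESTORE plus UI_DRAW PIXEL calls:

```c
#include <string.h>
#include <stdint.h>

#define LCD_WIDTH  178
#define LCD_HEIGHT 128
#define BYTES_PER_LINE ((LCD_WIDTH + 7) / 8)

/* Hypothetical stand-ins for the VM's screen buffer and the
   snapshot kept by UI_DRAW STORE */
static uint8_t Screen[BYTES_PER_LINE * LCD_HEIGHT];
static uint8_t Snapshot[BYTES_PER_LINE * LCD_HEIGHT];

/* STORE / RESTORE are plain memcpy's in the firmware */
static void Store(void)   { memcpy(Snapshot, Screen, sizeof(Screen)); }
static void Restore(void) { memcpy(Screen, Snapshot, sizeof(Snapshot)); }

/* Set or clear one pixel in the 1-bit-per-pixel buffer
   (bit order within the byte is an assumption) */
static void DrawPixel(int X, int Y, int On)
{
  uint8_t *Byte = &Screen[Y * BYTES_PER_LINE + (X >> 3)];
  uint8_t  Mask = (uint8_t)(1 << (X & 7));
  if (On) { *Byte |= Mask; } else { *Byte &= (uint8_t)~Mask; }
}

/* Transparent blit: sprite pixels with value 0 are skipped,
   so the restored background shows through */
static void DrawSprite(int X0, int Y0, int W, int H, const uint8_t *Sprite)
{
  for (int Y = 0; Y < H; Y++)
    for (int X = 0; X < W; X++)
      if (Sprite[Y * W + X])
        DrawPixel(X0 + X, Y0 + Y, 1);
}
```

The game loop would then be: Restore() once per frame, then DrawSprite() for each moving object, so the expensive background render happens only once.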

Going back to MAX_FRAMES_PER_SEC, I may have been wrong at the beginning of this post. Looking at what the define is actually used for, it seems to automatically flip the render buffer onto the display at this interval; it doesn’t necessarily enforce a cap at this framerate. I’ll have to investigate more to be sure, though.

3 thoughts on “MAX_FRAMES_PER_SEC and other rendering related findings”

  1. David Lechner

    I think MAX_FRAMES_PER_SEC is not defined in the version of LMS running on the EV3. LMS can also be compiled for x86, which I assume that LEGO used for developing the various UI screens on a desktop instead of on the EV3. In the frame buffer driver, it uses deferred i/o to limit the LCD to updating 20 times per second (HZ / 20). MAX_FRAMES_PER_SEC I assume is to simulate this when running on the desktop. The EV3 is definitely not running X11. I think those functions are only called in the desktop simulation version.

    The extra 6 pixels are just not displayed. The LCD controller chip has to have each line come out to an integer number of bytes. Trying to handle part of a byte that is on one line and the other part on the next line is more complicated than it needs to be, so they just waste a little space to make it come out even.

    1. Magnus Post author

      Hi David,

      Thanks for the info. Reading the frame buffer driver and the presentation by Texas Instruments was super interesting! I’ve never been this close to the hardware before. I looked into the EV3 source that is on GitHub and it seems to indicate that MAX_FRAMES_PER_SEC is always defined (lms2012.h in ev3sources master). I can’t find any preprocessor conditions that could make the MAX_FRAMES_PER_SEC macro not exist. (I don’t have a project loaded so I can’t easily follow all the preprocessor directives though :) )

      I am curious though why the initdevdata struct in the st7586fb driver states that there are 2 bits per pixel (frame buffer driver, line 105) while the actual pixel writing and frame buffer allocation in the VM use only 1 bit per pixel. I had another look at d_lcd.c and found what you said about X11 only being used in desktop builds. I missed the “#ifndef Linux_X86” directive at the top and the fact that there are two implementations of dLcdExec. :) That just raised another question though. dLcdExec for the EV3 brick seems to convert the 1-bit-per-pixel frame buffer into a 2-or-3-bits-per-pixel frame buffer. The PixelTab table seems to be a conversion table that turns 3 pixels at 1 bit per pixel (so 3 bits at once) into an 8-bits-per-3-pixels field.
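      A hedged sketch of how such a lookup table could be built. The field layout (first two pixels get 3 bits each, the third gets 2) matches the 8-bits-per-3-pixels reading above, but the pixel order within the index and the exact bit positions are my assumptions, not verified against the firmware’s actual PixelTab:

```c
#include <stdint.h>

/* Build an 8-entry table mapping 3 packed 1-bit pixels to one byte.
   Assumptions: index bit 0 is the leftmost pixel, and a set pixel
   maps to all-ones in its bit field. */
static void BuildPixelTab(uint8_t Tab[8])
{
  for (int I = 0; I < 8; I++)
  {
    uint8_t B = 0;
    if (I & 0x01) B |= 0xE0;  /* pixel 0 -> bits 7..5 (3-bit field) */
    if (I & 0x02) B |= 0x1C;  /* pixel 1 -> bits 4..2 (3-bit field) */
    if (I & 0x04) B |= 0x03;  /* pixel 2 -> bits 1..0 (2-bit field) */
    Tab[I] = B;
  }
}
```

      With all three pixels set the byte comes out as 0xFF, and with none set it is 0x00, which is consistent with a pure black-and-white conversion that ignores the in-between gray levels.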

      I wonder if that means the screen can actually do gradients? 8 shades of gray for every 2 out of 3 columns and 4 shades for every third column.

  2. David Lechner

    I was just speculating about MAX_FRAMES_PER_SEC, so I am sure you are correct. As you saw, some pixels are actually 3 bits and some are 2. Line 105 is not actually used by anything in LMS – I am guessing that Matt Porter just picked that number because it had to be either 2 or 3 and 2 was safer.

    According to the datasheet, the controller supports grayscale. However, I have tried turning on gray scale mode and it does not appear to work. From what I have read, to save money, LCD controllers like this don’t always get all the features included when they are manufactured.
