strcpy, strcat and sprintf vs strncpy, strncat and snprintf

We quite often see the use of strcpy, strcat and sprintf flagged by static code analyzers such as Coverity®. These tools suggest using strncpy, strncat and snprintf respectively instead.

First we need to understand that replacing them is not a panacea for all such occurrences. Sometimes software developers assume that the replacement alone curtails the defects a programmer can introduce, but applied blindly it can hide defects and lead to subtle issues in the longer run. As an example, if we use snprintf instead of sprintf, we can expect the output to be null-terminated without exceeding the allocated buffer. This will probably keep the program from crashing when a long string is fed into it. But think of a situation where the input to snprintf exceeds the allocated limit: unless the return value is checked, the programmer will never know that truncation occurred, and the program will show inaccurate behavior such as being unable to retrieve the original input.
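As a minimal sketch of how such silent truncation can be detected, note that C99 snprintf returns the length the untruncated result would have had; comparing it against the buffer size reveals the loss (the message text here is just an illustration):

#include <stdio.h>

int main(void)
{
    char msg[16];
    int needed = snprintf(msg, sizeof(msg), "%s", "a rather long input string");
    if (needed < 0)
        return 1;                       /* encoding error */
    if ((size_t)needed >= sizeof(msg))  /* output did not fit: truncated */
        fprintf(stderr, "truncated: needed %d bytes\n", needed + 1);
    return 0;
}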

But that does not mean using snprintf over sprintf adds no value to your effort. There are good reasons why static code analyzers suggest it; you can find some of its virtues here.

Below are some debatable but important facts about the usage of strncpy, strncat and snprintf.

 

1. No '-1' is needed with the snprintf size argument.

Most programmers tend to pass one less than the actual size of the character array in order to reserve room for the null character ('\0').

char msg[100] = {0};

snprintf(msg, sizeof(msg)-1, "'-1' for the usage of '-1'");

But with snprintf this gives no benefit, as the resulting string will always be at most one shorter than the given size (n). snprintf discards all characters beyond (n-1) and reserves the n'th location for the null character. Instead of the above, the following is sufficient.

snprintf(msg, sizeof(msg), "I give you now '+1'");
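To see the (n-1) rule concretely, here is a small illustrative example: with an 8-byte buffer, snprintf keeps the first 7 characters and writes the terminator into the 8th.

#include <stdio.h>

int main(void)
{
    char small[8];
    snprintf(small, sizeof(small), "%s", "truncate me");
    printf("%s\n", small);  /* prints "truncat": 7 characters plus '\0' */
    return 0;
}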

 

2. For concatenation, sprintf and snprintf behave differently.

You may observe the following discrepancy when concatenating with sprintf and snprintf. (Note that passing the destination buffer as a source argument, as below, is technically undefined behavior; the outputs described are what is commonly observed in practice.)

char buf[20] = "";

sprintf(buf, "%s%s", buf, "this is ");

sprintf(buf, "%s%s", buf, "a concat");

The resulting buffer will be "this is a concat". You might expect the same semantics from snprintf too, as follows.

snprintf(buf, sizeof(buf), "%s%s", buf, "this is ");

snprintf(buf, sizeof(buf), "%s%s", buf, "a replacement");

This time the resulting buffer will only contain "a replacement". This is because the buffer is overwritten when we try concatenation in this form (and, with the arguments overlapping, the exact result is implementation-dependent). The correct usage is below.

snprintf(buf, sizeof(buf), "%s", "this is ");

snprintf(buf+strlen(buf), sizeof(buf)-strlen(buf), "%s", "correct");

Now buf holds "this is correct".

NOTE: You will find this useful in differentiating between 'sizeof' and 'strlen'.

In both sprintf and snprintf, the buffer argument is considered "a pointer to a buffer where the resulting C-string is stored". This means you can use the address of an element of the buffer to append a character array. Thus '&buf[strlen(buf)]' in place of 'buf+strlen(buf)' above will also give the same result.

NOTE: But don't try '&buf+strlen(buf)' or '&(buf+strlen(buf))', as these would give you an incorrect result or a compilation error respectively.

 

3. strncat in succession

char buf[BUF_SIZE];

strncpy(buf, "some string", sizeof(buf)-1);

buf[sizeof(buf)-1] = '\0';

strncat(buf, ", another string", sizeof(buf) - strlen(buf) - 1);

Again, someone could argue that the strncpy of "some string" would also copy the null terminator into the buffer (buf), making manual null termination meaningless. But what if the length of "some string" were greater than or equal to sizeof(buf)-1? Then strncpy would write no terminator at all.

Needless to say, we can use strncat straight away without the initial strncpy, as long as the buffer starts out as an empty string.

buf[0] = '\0';

strncat(buf, "some string", sizeof(buf) - strlen(buf) - 1);

strncat(buf, ", another string", sizeof(buf) - strlen(buf) - 1);

 

4. strncat always null-terminates but strncpy doesn't

char dest[64];

For strncat, dest should already have a '\0' somewhere in the array; dest[0] = '\0'; would be enough.

For strncpy,

dest[min(strlen(src), sizeof(dest)-1)] = '\0';

is needed (min here is illustrative, not a standard C function), for the following reasons.
i. To accommodate a src string whose length is 63 (i.e. sizeof(dest)-1) or more, where its '\0' is not copied when strncpy is performed:

dest[sizeof(dest)-1] = '\0';

ii. If you perform a substring copy like the following,

strncpy(dest, src, 5);

and the string length of src is greater than or equal to 5 characters (given that sizeof(dest) > 5), then the copy will not include a '\0'. You will have to add the null character manually as follows.

dest[5] = '\0';

 

What if "strlen(src) < 5 < sizeof(dest)"?

The following example is directly taken from here for an illustration.

#include <iostream>
#include <cstring>

int main()
{
    const char* src = "hi";
    char dest[6] = {'a', 'b', 'c', 'd', 'e', 'f'};
    std::strncpy(dest, src, 5);

    std::cout << "The contents of dest are: ";
    for (char c : dest) {
        if (c) {
            std::cout << c << ' ';
        } else {
            std::cout << "\\0" << ' ';
        }
    }
    std::cout << '\n';
    std::cout << dest << std::endl;
}

The resulting dest will be: h i \0 \0 \0 f

This shows that strncpy pads the destination with '\0' only up to the count specified, after copying src.

For a string whose content is known, such as

const char* c = "hi";

instead of

strncpy(buf, c, 2);

buf[2] = '\0';

the following is sufficient, as it copies the '\0' of "hi" as well.

strncpy(buf, c, 3);

 

5. strncat as a better alternative to strncpy.

strncat and strncpy need a direct or indirect way of making room for the null character at the end. This means point 1 above does not apply to strncat and strncpy: you cannot use the full length of the buffer, and the 'size-1' form is necessary when you pass the buffer's full size.

strncpy(dest, src, sizeof(dest)-1);

Someone might suggest the following in the case of strncpy.

char dest[100];

strncpy(dest, src, sizeof(dest));

dest[sizeof(dest)-1] = '\0';

One might argue that if 'src' holds a properly defined character array, it already contains a '\0', so adding the null character at the end is unnecessary. But what if the length of 'src' is greater than or equal to the size of 'dest'? Then the '\0' would lie beyond sizeof(dest), and something innocent like printf("%s\n", dest); could also crash.

But strncat comes in as a handy alternative to strncpy, as follows.

*dest = '\0';

strncat(dest, src, sizeof(dest)-1);
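Wrapped up as a tiny helper, the idiom looks like the sketch below. safe_copy is a hypothetical name used for illustration, not a standard function; it always null-terminates and truncates when src does not fit.

#include <string.h>

/* Illustrative helper built on the strncat idiom above:
   always leaves dest null-terminated, truncating src if necessary. */
static void safe_copy(char *dest, size_t dest_size, const char *src)
{
    if (dest_size == 0)
        return;
    dest[0] = '\0';                    /* strncat needs an existing terminator */
    strncat(dest, src, dest_size - 1); /* reserves room for the final '\0' */
}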

 

Please feel free to comment below if you find anything unclear, wrong or buggy. Have a nice day!

Building OpenCV & OpenCV Extra Modules For Android From Source

Background

In my last blog, I described how to set up Android Studio (AS) to work with OpenCV4Android. Initially Android supported the ADT plugin in Eclipse. If you use Eclipse for Android development, build the OpenCV Java wrapper or simply build OpenCV for the C++ API following the respective blogs; you will need the standalone Android SDK in that case. However, Android recently announced that support for the ADT plugin in Eclipse has ended and requested all Eclipse-based development to move to the Android Studio environment.

OpenCV4Android has two forms as follows.

  1. Using OpenCV's native interface. This can be a slightly hectic task for a fresh OpenCV starter: you will have to learn the NDK, but it is promising when it comes to performance optimization. Isn't it nice to code in C++ and have it work on a mobile device, forgetting the interconnections?
  2. Using the Java API. This is easier than the native interface: you just import the OpenCV classes and move on. Not to forget, computations are still performed at the native level, so there is a cost in the comparatively expensive JNI calls. On the other hand, Java has its own advantages and so does this method.

Both methods need an OpenCV4Android distribution, either the pre-built one or one built from source. The pre-built OpenCV4Android libraries contain only the modules in the OpenCV main repository; if you need more than that, you need a full build from source. Though you can build a Java wrapper for OpenCV using this, the 'opencv-xxx.jar' file does not contain the android module. Even after you include android in the jar file, you will have to manually configure the dependency setup for AS. Therefore it is again advised to build from source for Android. Let's start!

Pre-requisites

  1. Download OpenCV and unzip (say at <opencv-source>)
  2. Download OpenCV extra modules and unzip (say at <opencv-contrib-source>)
  3. Download Android NDK (say at <ndk-dir>)
  4. Download CMake (to say <cmake-dir>) and MinGW (to say <mingw-dir>)
  5. Install Android Studio

 

Configuration

Go to '<opencv-source>/platforms' and create a folder named 'android_arm'.

Run '<cmake-dir>/bin/cmake-gui' and set the paths as follows.


Set paths of source and building directories

Press the 'Add Entry' button in cmake-gui, add 'ANDROID_NDK' as a cmake 'path' variable and provide the value <ndk-dir> (in my case the path is 'C:\android-ndk-r10e').
Add 'ANDROID_NDK_HOST_X64' too and check it.

Set ‘CMAKE_TOOLCHAIN_FILE’ as ‘<opencv-source>/platforms/android/android.toolchain.cmake’.

Press 'Configure'. Choose 'MinGW Makefiles' as the generator and press 'Finish'. Check whether the above configurations are properly set as in the figures below. Keep the other settings as they are.


Android Variables in CMake


CMake Toolchain Variables in CMake

Building From Source

Go to <mingw-dir>/msys/1.0 and run ‘msys’ bash file.

Navigate to the '<opencv-source>/platforms/android_arm' path and run the 'mingw32-make' command. After it completes, run 'mingw32-make install'.


OpenCV for Android is built using MinGW

Now you have successfully built OpenCV4Android in '<opencv-source>/platforms/android_arm/install/sdk'.

Then import the built modules as follows (a detailed step-by-step explanation is given in the previous blog).

  1. Launch Android Studio and create a new project by selecting File -> New -> New Project…
  2. Go to File -> New -> Import Module… and provide '<opencv-source>/platforms/android_arm/install/sdk/java'. Click 'Next' and 'Finish'.
  3. If there is a Gradle synchronization error, change 'compileSdkVersion' and related fields in the build.gradle file of the openCVLibraryXXX (imported module) folder.
  4. Then add the imported library as a dependency of the 'app' module in File -> 'Project Structure'.
  5. Copy the OpenCV libraries from '<opencv-source>/platforms/android_arm/install/sdk/native/libs' into a newly created folder (jniLibs) inside the 'app\src\main' folder.

Additionally, in AS, after you import the module you might experience a Gradle build error as follows.

unspecified on project app resolves to an APK archive which is not supported as a compilation dependency.

Therefore modify the following lines in the build.gradle file of the imported module.

  1. Replace 'apply plugin: com.android.application' with 'apply plugin: com.android.library'
  2. Remove the line 'applicationId "org.opencv"'

 

OpenCV Extra Modules for Android Studio

Below are the steps to build OpenCV’s extra modules for Android Studio.

Go to each of the extra modules you need (say '<opencv-contrib-source>/modules/extra_1') and open its 'CMakeLists.txt' file. Find the line containing the 'ocv_define_module' call.

ocv_define_module(extra_1 opencv some other modules)

Add the 'java' module at the end of the parenthesised list.

ocv_define_module(extra_1 opencv some other modules java)

If there is no 'WRAP' in the list, add 'WRAP' before 'java'.

ocv_define_module(extra_1 opencv some other modules WRAP java)

Open 'cmake-gui', select 'OPENCV_EXTRA_MODULES_PATH' under the 'OPENCV' category and insert '<opencv-contrib-source>/modules'.

Keep the earlier 'cmake-gui' settings as they are. At build time the extra module 'opencv_bioinspired' might fail, so uncheck it before configuring in cmake-gui.

Again follow the steps given under sub topic ‘Building From Source’ explained earlier in this blog.

Final Word

Even after all this, in OpenCV 3+, though all the settings, modules and dependencies are set and imported, you might feel something is missing when the application launched on the Android device stops unexpectedly as follows.

Package not found

OpenCV Manager package was not found! Try to install it?

This is because the initial OpenCV module call points to 'opencv_java' instead of 'opencv_java3'.

To change this, go to openCVLibraryXXX (imported module) -> src -> main -> java -> org -> opencv -> android -> StaticHelper.java.

Find the line,

result &= loadLibrary("opencv_java");

and change this as,

result &= loadLibrary("opencv_java3");

 

This is the end of this blog and I hope you found it useful. Why not comment below if you think some necessary parts are missing or if I have done something wrong. Good day!

 

OpenCV For Mobile Devices Using Android Studio

Background

OpenCV has been known as a computer vision library for 15 years. The ever-increasing open source community has made extensive use of OpenCV without hesitation and contributes back to its growth as well. Ever since its inception OpenCV has extended its support to a wide range of platforms, and the recent addition is the Android mobile platform.

There are two main IDEs supported at the moment for Android development of OpenCV.

  1. Eclipse with the CDT plugin, Android SDK, NDK and ADT plugin. There is a plethora of tutorials on Android development using Eclipse, including the official pages of OpenCV. This and this will take you through developing OpenCV using Eclipse. The OpenCV Java wrapper can also be built for Eclipse.
  2. The second option is Android Studio (AS). AS gets better with time as a lot of community effort is put into it, though initially it had significant bottlenecks.

Which is better? Though it is hard to answer, the Android community has recently announced that their support for Eclipse has ended and they are now focusing on AS. Detailed answers can be found in this.

The purpose of this blog is to make Android developers' lives easier in adapting to OpenCV development using AS. Let's get started.

Pre-requisites

Download OpenCV and unzip it (at say <opencv-source>) if you want to code in C++ and use the native interface for Android. In this case, you can install OpenCV for Windows using this. You can still build OpenCV4Android from source. Simply ignore this if you prefer the pre-built OpenCV4Android libraries instead.

  1. Download and unzip OpenCV4Android (at say <opencv4android>)
  2. Download and install Android Studio


 

Run AS

  1. Launch it and follow the instructions to build a new project as given in this.
  2. You can use a testing device (such as an Android mobile device) or an emulator in order to test your application. To install HAXM or its latest version, go to 'Tools' -> 'Android' -> 'SDK Manager'. In the window that opens, select 'Appearance & Behaviour' -> 'System Settings' -> 'Android SDK' and select the 'SDK Tools' tab. Check the 'Intel x86 Emulator Accelerator …' entry and click the 'Apply' button to install HAXM for Android Studio. Then run the 'intelhaxm-android.exe' file, following the steps given in this.
  3. If you get the "haxm is not working and emulator runs in emulation mode" error when you run the application on the emulator, HAXM should be re-installed. This is because the AVD cannot have a higher memory limit than HAXM. You can adjust the HAXM memory limit by re-installing it using the instructions given in 2.
    Otherwise, if you need to change the AVD memory, follow the above link's 3rd best answer in 'Tools' -> 'Android' -> 'AVD Manager'.

It is advised to use an Android mobile device for testing instead of the emulator, as the emulator eats up a lot of system memory.


Install SDK tools using SDK Manager

OpenCV with AS

Having experimented with a sample code above, let's take an OpenCV example. Sample projects can be found in the '<opencv4android>\samples' folder.

  1. Copy the 'res' folder of a sample project (say '<opencv4android>\OpenCV-android-sdk\samples\color-blob-detection') and replace the one in the project you just created above (say 'opencvsample') in the workspace location ('<AndroidStudioProjects>\opencvsample\app\src\main').
    After replacing the 'res' folder, make sure you retain the 'mipmap*' folders and the 'colors', 'dimens' and 'styles' files in the 'values' folder.
  2. Copy the 'src' folder in '<opencv4android>\samples\color-blob-detection' and paste it inside '<AndroidStudioProjects>\opencvsample\app\src\main' after removing the 'java' folder residing in this location. Rename the 'src' folder to 'java'.

Now you will have to import modules and libraries for this project as given below.

Import OpenCV Modules and Libraries

Import the opencv java module via File -> New -> Import Module and provide '<opencv4android>\OpenCV-android-sdk\sdk\java'. (Instead, we can just copy the module from the sdk path and add the library path in settings.gradle.)


Specify SDK path to import OpenCV for Android

If there is a Gradle synchronization error, change 'compileSdkVersion' and related fields in the build.gradle file of the openCVLibraryXXX (imported module) folder.


The SDK version should match the one we have in the system

Then add it as a dependency of the 'app' module in File -> 'Project Structure'.


Add OpenCV library as a dependency to the application

Copy the OpenCV libraries from '<opencv4android>\OpenCV-android-sdk\sdk\native\libs' into a newly created folder (jniLibs) inside the 'app\src\main' folder. (Otherwise you will get an error saying "OpenCV Manager package was not found! Try to install it?")

If 'namespace opencv is not bound' shows up for opencv:show_fps in a layout/*.xml file, add 'xmlns:opencv="http://schemas.android.com/apk/res-auto"' at the beginning.

Camera Configurations

This is an implementation of '<opencv4android>\samples\color-blob-detection' in AS. When you run this application on the emulator, since the camera is needed here, there can be times when the application does not run smoothly because some camera configuration is needed.

Add a web cam to the emulator using the AVD Manager. Please note: add only one (front or back) as shown.


Add web camera on AVD

Add the web cam permission in the AndroidManifest.xml file


Add permission for camera to be used in the application

Check the AndroidManifest.xml file for names that may differ, as we are just copying samples into the built project (in my case the package name, android:name, had to be changed).

You should add the web cam permission in the AndroidManifest.xml file to enable the permission option on Android devices. After that you can grant the permission on the device itself.

Note that in recent Android versions you will have to grant the permission explicitly, as shown below, unlike in previous versions. Otherwise your application will stop unexpectedly saying 'It seems that your device does not support camera (or it is locked). Application will be closed.' For that, go to Menu -> Settings -> Apps -> <My_Application> and tap 'Permissions'. Then, in the window that opens, enable 'Camera'.


Tap Permissions and enable Camera

Using An External Android Device For Testing

To use a device instead of the emulator in AS, follow the instructions given in this. Sometimes the device might not be detected in a Windows environment, as USB drivers are needed to detect it. Install the required drivers using this.

Select Run -> Run 'app'. In the 'Device Chooser' window, select 'Choose a running device'. If the device shows as offline even after it is detected, enable the 'USB debugging' option on your device; it must be enabled in order to work with the device.

In recent versions of Android you might not see 'Developer options', as it comes as a hidden feature. Go to Settings -> General -> About device and tap the 'Build number' section 7 times. This will reveal 'Developer options', under which you will find 'USB debugging'.

When you run the application, you might observe that the camera output is rotated 90 degrees. There is a solution for this, but it brings an "OpenCV Error: Insufficient memory ( Failed to …" error. To overcome both, I came up with these modifications.

mRgba = inputFrame.rgba();              // current camera frame in RGBA
Size sizeTemp = mRgba.size();           // remember the original frame size
Core.transpose(mRgba, mRgba);           // swap rows and columns (rotates the frame)
Core.flip(mRgba, mRgba, 1);             // flip around the y-axis to complete the 90-degree rotation
Imgproc.resize(mRgba, mRgba, sizeTemp); // restore the original dimensions to avoid the memory error

This is the end of this blog and I hope you have gained something from it. Should you require any clarification, feel free to comment below.

Building Java Wrapper For OpenCV

In my last post, I explained how OpenCV is built in a Windows environment using MinGW. Here I will explain how OpenCV for Java can be built using MinGW.

Java brings a range of proven coding practices such as high-level abstraction, easy memory management and more. This is an endeavour of the OpenCV community to bring the best of both worlds, where C++ is used at the development level and Java (not to mention Python) at the implementation level. This has opened OpenCV up to a wider audience since it adopted the Java and Python languages.

It is of course easy to download pre-built OpenCV libraries for Java. But the pre-built files are only available for Windows; for other OSes you are required to build from source. Further, the Java bindings are only available for the main modules in the opencv main repo and not for the opencv extra modules, at least as of now.

What if you need the opencv extra modules in the Java wrappers? Then you are left with only one option: build from source! This blog covers mostly the Windows environment, but you should still be able to grasp the main steps involved in the process on any OS.

Pre-requisites


Figure 1: Set System Paths for Ant & Java

  1. Download Python 2.6+
  2. Download JDK 6+, as the Java binding for OpenCV needs it.
  3. Set the Java path as shown in Figure 1.
    1. Set the 'JAVA_HOME' variable under 'User variables'
    2. Set the 'Path' variable under 'System variables'
  4. Download Ant (say at '<ant-dir>')
  5. Set the Ant path variables (refer to Figure 1) referring to this
  6. Download CMake
  7. Download MinGW and set its path

CMake

Follow the cmake variable settings under CMake Configuration in the previous blog. Additionally, uncheck 'BUILD_SHARED_LIBS' under the 'BUILD' category as shown in Figure 2.


Figure 2: Uncheck shared library option

Once you 'Configure' (please select MinGW Makefiles and click 'Finish'), you will see a new option, 'BUILD_FAT_JAVA_LIB', which specifies that a Java wrapper will be created for all enabled OpenCV libraries.

For me, for whatever reason (if you know why, please point it out), the paths set for Ant, Java and Python had no effect on cmake and were not picked up during configuration. After 'Configure', you can check in the output window whether 'Java wrappers:' is set to YES or NO. If it's YES, you are through: simply ignore the following steps and just click 'Configure'. But for me it was NO, so I had to set the paths manually using cmake-gui.


Figure 3: Set Ant path in cmake

Give the path '<ant-dir>\bin\ant.bat' for 'ANT_EXECUTABLE' under 'Ungrouped Entries' as shown in Figure 3. (Don't worry about the other unset paths under 'Ungrouped Entries'; those are not required for this topic.)

Set java paths in cmake-gui as shown in the Figure 4.

Then 'Configure'; you will see 'BUILD_opencv_java' checked under the BUILD category.

 


Figure 4: Set Java path in cmake

MinGW

Follow the steps given under MinGW Compilation in the previous blog.

Note: Strangely, mingw32-make invoked the 'jre' path instead of 'jdk' even though I had set the paths correctly in both the system and cmake-gui. It searches for tools.jar in the '<jre-dir>/lib' folder while it of course resides in '<jdk-dir>/lib'. I copied tools.jar into '<jre-dir>/lib' to proceed with the build. (I know this is not the recommended way of doing this, but for the moment it worked for me. If you know a better way, please point it out.)

OpenCV Extra Modules

Follow the steps given under OpenCV Extra Modules in the previous blog.

Though the above steps will include the extra modules in the 'opencv_java' library, they will not bind the Java wrappers into the .jar file. For this you will have to specify, in each of your preferred extra modules, that you need the Java binding in the .jar file.

E.g.: If you need ‘text’ module in ‘opencv_contrib’,

  1. Open the file '<opencv_contrib_path>\modules\text\CMakeLists.txt' and search for the line containing the 'ocv_define_module' option.
  2. Add 'java' at the end of the module list as

ocv_define_module(text opencv_ml … WRAP python java)

If the 'ocv_define_module' line does not contain 'WRAP', include it before 'java'. Otherwise you will probably get an error saying 'cannot find -ljava'.

E.g.: the 'text' module contains 'WRAP' while 'adas' does not. Therefore add 'WRAP java' as follows in '<opencv_contrib_path>\modules\adas\CMakeLists.txt'

ocv_define_module(adas opencv_xobjdetect WRAP java)

 

Using Java Wrappers in Eclipse

The above steps will generate two important files inside '<opencv-build>\install\java'. One is 'libopencv_java300.dll', a dynamic library containing all the OpenCV libraries that were built. The second is 'opencv-xxx.jar', which can now be used as the Java binding interface by Eclipse. Please refer to this site to configure the Eclipse environment to use 'opencv-xxx.jar' for Java wrapping.

 

This is the end of this blog and I hope you enjoyed it. It would be great if you could point out any mistakes you have come across. Thank you.

 

Building OpenCV Using MinGW on Windows

Background

OpenCV is a rich open-source computer vision library initially maintained by Intel Inc. and now governed by itseez. Development of OpenCV is generally based on C and C++, though it is implicitly said that the C API is almost deprecated. Python and Java wrappers are also available to make use of the rich functionality with the efficient coding styles of those languages. Further, OpenCV supports cross-platform use ranging from PCs to mobile devices, along with various GPU acceleration optimizations such as CUDA and OpenCL, as well as TBB, etc.


The main objective of this blog is to present the way OpenCV 3.0.0 is built in a Windows environment. Though there are so many articles, blogs and forums covering this basic topic, I saw a lack of material on building OpenCV with MinGW for platforms (Eclipse, Netbeans, etc.) other than Visual Studio. If you have faced the same issue, the steps below are for you.

Download

  1. Download or clone OpenCV and unzip it to a desired destination (say 'opencv-source').
  2. Download the OpenCV Extra Modules (say 'opencv_contrib')
  3. Download the cmake binaries and unzip them into a desired location (say 'cmake-binary').
  4. Download and install MinGW from scratch (say at 'mingw-dir')
  5. Set the PATH for MinGW (otherwise you would get an error saying 'libgmp-10.dll is missing')

CMake Configuration


  1. Run 'cmake-gui' in <cmake-binary>/bin
  2. Enter the <opencv-source> path in the 'Where is the source code:' text box.
  3. Enter the newly created build directory's path (say 'opencv-build') in the 'Where to build the binaries:' text box.
  4. Check both the 'Grouped' and 'Advanced' checkboxes in order to navigate the options available in cmake-gui easily.
  5. For the moment disable IPP by unchecking WITH_IPP under the WITH category (otherwise a 'cannot find -lRunTmChk' error is expected when building OpenCV)
  6. Click 'Configure'. In the window that pops up, select the MinGW option under 'Specify the generator for this project' and click 'Finish'.
  7. Wait until the configuration is done and then click 'Generate'. Now you have properly configured OpenCV; check whether you have cvconfig.h inside the 'opencv-build' folder.

CMake GUI configuration setup

MinGW Compilation

  1. Before building the project it is advised to comment out the 'add_extra_compiler_option(-Werror=non-virtual-dtor)' option in the '<opencv-source>\cmake\OpenCVCompilerOptions.cmake' file in order to avoid errors related to '[-Werror=non-virtual-dtor]' during the mingw build process.
  2. Go to <mingw-dir>/msys/1.0 and run ‘msys’ bash file.
  3. Navigate to <opencv-build> path and run ‘mingw32-make’ command.
  4. Run 'mingw32-make install' after the build succeeds. After successfully building and installing OpenCV, go to the <opencv-build> folder and copy the generated files to their respective destinations as follows.
  5. Copy two folders in ‘<opencv-build>\install\include’ to ‘<mingw-dir>\include’
  6. Copy files in ‘<opencv-build>\install\x86\mingw\bin’ to ‘C:\Windows\SysWOW64’ if your system is 64-bit, otherwise ‘C:\Windows\System32’
  7. Copy files in ‘<opencv-build>\install\x86\mingw\lib’ to ‘<mingw-dir>\lib’

OpenCV modules are compiled using MinGW

OpenCV Extra Modules

Additionally, if you need extra opencv modules such as the Text, Face, etc. modules categorized under the separate repository, follow the steps below.


Specify OpenCV Extra Modules path

  1. Unzip ‘opencv_contrib’ modules zip file at a desired path (say ‘opencv_contrib_path’).
  2. Again open ‘cmake-gui’ and select ‘OPENCV_EXTRA_MODULES_PATH’ under ‘OPENCV’ category and insert ‘<opencv_contrib_path>\modules’
  3. For the moment deselect ‘opencv_bioinspired’ module under the ‘BUILD’ category as it crashes unexpectedly.
  4. It seems some API mismatch in the system generates the error '<opencv_contrib_path>\opencv_contrib-master\modules\ximgproc\src\sparse_match_interpolators.cpp:171:52: error: 'const class cv::_InputArray' has no member named 'isVector''. Therefore try to 'Configure' and 'Generate' after commenting out the offending calls as follows.
    CV_Assert( !from_points.empty() && //from_points.isVector() &&
               !to_points.empty() && //to_points.isVector() &&

 

These are the steps and resolutions I made on the way to a successful OpenCV build in my Windows environment. If this helps you, I am quite happy with that; if you think I have done something silly, well, please point it out.

Bitcoin Mining in Programmers’ Point of View

Have you ever wondered why there are no air routes from America to Australia across the Pacific? Well, google it! If you find the reason, then try to comprehend how people have been mechanized to follow the rules and regulations third parties have put forward, without considering the smart way of doing things. Maybe for our own good. A bit harsh, nah! Anyway, this post elaborates the concepts behind Bitcoin in a nutshell, which is another endeavor to overcome intermediaries. Let's have a quick introduction before getting into the programming stuff.

Bitcoin: Bitcoin is a form of digital currency which was introduced recently as a medium of exchange over the air. It is mostly used in transaction processing and validation on a peer-to-peer network, exploiting cryptographic operations with the specifications and software of the open source community. Bitcoin's total base money supply is currently valued at $125 million. A bitcoin is simply an "SHA-256" hash in hexadecimal format (256 being the number of bits involved) and is also associated with a private/public key known only to the user, which is used to spend the coin.


Fig1. The input is cryptographically converted to a fixed-size string (hash value)

SHA-256: A bitcoin is exchanged or spent by a payer to a payee using an SHA-256 hashed address which points to a wallet (file) containing the bitcoins. First, a group of transactions is broadcast to the bitcoin peer-to-peer network for validation. This continues until one node is said to have found a random SHA-256 hash which starts with a specific number of 0 bits; this requirement limits the amount of computing power searching for a suitable number. This hash is coupled with a 'nonce', a user-adjustable number running from 0 to 2^32. It is then broadcast again on the peer-to-peer network and combined with the previously completed block hashes to generate a unique hash. As a reward, the node which created the hash is granted bitcoins and/or transaction fees.


Fig2. A customized bitcoin miner

Miners: The nodes which attempt to generate the hash are called miners. They are generally implemented as digital designs on GPUs, FPGAs and ASICs, owing to the high computational overhead which makes normal PCs unable to cope. Generation is regulated to roughly 10 minutes per block to comply with the standard controlled rate of hash block generation. The figure above shows a customized bitcoin miner employed for mining.

To get a visual idea of how bitcoins are involved in transactions, play the video below.

Programming: Well, now let's look at the programming aspect of this from the top.

1) Collect all the pending transactions you have from the peer-to-peer network.

2) Calculate the Merkle root by hashing all the transactions pairwise. This is done through a Merkle tree.

3) Construct a new block header by appending the Merkle root, the previous block's hash, a 'nonce' and information like the version, the current timestamp and the 'target' (difficulty). Altogether 80 bytes.

4) Hash this block header with the SHA-256 algorithm twice (the 80-byte header spans two 64-byte SHA-256 input blocks).

5) Compare the newly generated hash (32 bytes) with the 'target'. If the target requirement is met, the generated hash is taken as valid and broadcast to all other peer nodes to verify and save in the public ledger for future use. If the target is not met, go back to step 3), increment the nonce and hash again until the target is met. The target is set according to the processing speed of the miners in the network.
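As a minimal sketch of steps 3) to 5), the inner loop looks roughly like the following. The sha256() helper and the exact header layout here are illustrative assumptions (the nonce is placed in the last 4 of the 80 header bytes; the real protocol's endianness rules are omitted):

#include <stdint.h>
#include <string.h>

/* Assumed helper: computes the SHA-256 digest of len bytes into out[32]. */
void sha256(const uint8_t *data, size_t len, uint8_t out[32]);

/* Returns 1 when a nonce satisfying the target is found, 0 when the
   32-bit nonce space is exhausted. 'target' is treated here as a 32-byte
   big-endian threshold the double hash must stay below. */
int mine(uint8_t header[80], const uint8_t target[32], uint32_t *found_nonce)
{
    uint8_t first[32], second[32];
    uint32_t nonce = 0;

    do {
        memcpy(&header[76], &nonce, 4);       /* step 3: place the nonce    */
        sha256(header, 80, first);            /* step 4: hash the header... */
        sha256(first, 32, second);            /* ...then hash the hash      */
        if (memcmp(second, target, 32) < 0) { /* step 5: compare to target  */
            *found_nonce = nonce;
            return 1;
        }
    } while (nonce++ != UINT32_MAX);          /* try every 32-bit nonce */

    return 0;
}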

The SHA-256 hash function plays a vital part in this algorithm. From the Merkle tree construction to the iterative generation of hashes to meet the target, SHA-256 is called thousands of times. To reduce the number of calls, the 'Midstate' is calculated. A self-explanatory python code is also available, but it is a hundred times slower when implemented on a CPU.
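For completeness, here is a sketch of the Merkle root construction mentioned above, reusing the assumed sha256() helper; the pairing-and-duplication rule follows Bitcoin's scheme, while serialization and endianness details are omitted:

#include <stdint.h>
#include <string.h>

/* Assumed helper, as in the mining sketch above. */
void sha256(const uint8_t *data, size_t len, uint8_t out[32]);

/* Illustrative Merkle-root reduction over n 32-byte transaction hashes
   stored back to back in 'hashes'; the root ends up in hashes[0..31].
   An odd hash at the end of a level is paired with itself. */
void merkle_root(uint8_t *hashes, size_t n)
{
    uint8_t pair[64], once[32];

    while (n > 1) {
        size_t out = 0;
        for (size_t i = 0; i < n; i += 2) {
            memcpy(pair, &hashes[32 * i], 32);
            memcpy(pair + 32, &hashes[32 * (i + 1 < n ? i + 1 : i)], 32);
            sha256(pair, 64, once);                /* first SHA-256 pass   */
            sha256(once, 32, &hashes[32 * out++]); /* Bitcoin hashes twice */
        }
        n = out;
    }
}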

There are a number of digital designs deployed to play the role of miners. Interestingly, miners are now ever easier to construct thanks to open source + high-level languages (C, C++ and SystemC) + the HLS tools available. HLS stands for High Level Synthesis, which allows programmers to code the logic in their favorite languages (C, C++) and synthesize it into an HDL (Hardware Description Language) like Verilog or VHDL. This is one such endeavor.

Bitcoin Miner Using Zedboard: There is already a plethora of miner implementations to date. But to achieve great efficiency, it is evident that even the open source community is marching towards digital designs to exploit parallelism massively. This project mainly focuses on making a "proof-of-concept" Zedboard which runs a fully functional Bitcoin miner for the open source community. The functions of the code are illustrated in this link.


Fig3. Another illustration of bitcoin transactions (please zoom if you cannot read it)

Below is a test bench written in the Vivado HLS Tutorial format. The C code available in the above repo can be verified using it. To do that, save this code in a file (sha256_tb.c) in the directory where 'sha256.c' lies.


#include <stdio.h>
#include <math.h>
#include "sha256.h"

int main() {
 FILE *fp;
 SHA256_CTX *ctx;
 uchar data[64] = "This is a test"; // Input
 uchar hash[64];                   // Output

 fp = fopen("out.dat", "w");
 sha256_top(data, hash);           // Calling sha256_top function

 int i;
 for(i=0; i<32; i++){
  if(hash[i] < 16)fprintf(fp, "0"); // pad single-digit bytes to two hex digits
  fprintf(fp, "%x", hash[i]);
 }
 fprintf(fp, "\n"); fclose(fp);

 printf("Comparing results against golden output\n");
 if(system("diff -w out.dat out.gold.dat")){
  fprintf(stdout, "--------------------------------------------------\n");
  fprintf(stdout, "FAIL: Output does not match with the golden output\n");
  fprintf(stdout, "--------------------------------------------------\n");
  return 1;
 } else {
  fprintf(stdout, "-----------------------------------------------\n");
  fprintf(stdout, "PASS: The output matches with the golden output\n");
  fprintf(stdout, "-----------------------------------------------\n");
  return 0;
 }
}

Generated hash value: d7316dc4154e2861218bbb31b54e32e24e0667d0de321c4a108acb8fbf82e7c2

Save this hash value in a file 'out.gold.dat' along with the test bench. It contains the verified output for the given input (in this case the string "This is a test"). Note that just producing an output does not mean the end of the process; rather, it should be validated using 'proof-of-work'.

Readers are kindly advised to go through the code and understand the concepts and algorithms. Since the code is HLS-compatible, with a little knowledge of Vivado HLS you could end up with an SHA-256 IP too. Note again that the code in the repo may not be efficient by today's standards. Yes? Why not? You can modify the code to comply with the latest developments in miners.

It would be great if you could go through the resource links in this post. There are a number of concepts under this topic which I have not covered, like double spending, block chaining, security methods, etc. Finally, I hope you are at least convinced that I have written something. Anything beyond that; oh; I am done!!! Feel free to raise any questions on this topic.

Industrial Training at Atrenta Lanka (Pvt) Ltd as a Verification Engineer


R&D Center in Grenoble, France

Atrenta Inc. is an SoC realization company with a number of sales offices, R&D centers and support offices in several locations across the globe. The company provides software solutions aimed at improving the design efficiency of complex Systems on Chips (SoCs) in terms of performance, power, area, etc. at an early stage. The main customers are the world's leading semiconductor and consumer electronics companies such as Intel, Fujitsu and IBM, since they are keen on finding the least expensive path to silicon. This is a brief account of how I interacted with Atrenta.

It was the 18th of November, 2013. I stepped into a building situated in Lukshmi Garden near Borella junction, Sri Lanka and went straight to the multinational company on the 2nd floor where I had been interviewed for the industrial training and got selected. After a little delay at the reception, the Administration Manager came and asked me to come inside the office.


Sri Lankan Branch Map

It was a dream venue I wanted to work at. The internship position did not come about as a just-in-time incident; there was a long process behind it which gave me real experience of recruitment. Once the internship period was announced, I first tried to identify the profession which best matched my strengths and weaknesses. In the end I decided to pursue my career in Digital Design or Biomedical Engineering. Before applying to Atrenta I communicated with people who had experience in these fields. They convinced me that Atrenta would be a one-off opportunity that I should consider for my future career. So I decided to raise my hand and apply to Atrenta, along with three other batch mates, for the two positions Atrenta offered.

I had been there a few weeks earlier for the interview, which consisted of a technical exam related to Digital System Design plus a face-to-face interview. At the technical exam, I could comfortably answer all the questions. With that, I was taken through stiff questioning by a senior verification engineer of the company, including technical aspects. (Motivation: if you are to face a technical exam prior to an interview, all you want to do is pump your knowledge with every grain you have and answer confidently, perhaps more intelligently. If anything goes wrong in the exam, don't be disappointed; you will probably be able to rectify it in the very next interview, as most of the time they will ask questions related to it. Perhaps talk a little bit more. Easy, nah!) Later we were informed that they had recruited me and only me. That was a wordless moment. The initial information about the two positions had been misinterpreted by our batch, as one position was intended for an undergraduate from the Computer Science Engineering Department. By the way, I was impressed to have this one-off opportunity, which I believe helped me sort my future career into this particular field. Oh, I have gone a bit into the flashback.

Where was I? The 18th, right. I was provided with all the necessities a new employee would get. These included a separate laptop, power cable, battery, work station, telephone with an extension (later), etc. In addition, inventories like FPGA boards and technical facilities like a separate central server location, software (SpyGlass, GenSys) access, a separate mail id, conference call facilities whenever needed, etc. were also provided. At the induction we were introduced to the personnel inside the company, and from then onwards they started to consider us a part of their organizational family too.


Working at Atrenta is not only about coding

We had been told it would be a 24-week short-term technical training. But the time I spent at Atrenta did not make me feel uncomfortable by any means. The exposure I got within this nearly six-month period is immense and impossible to sum up here. Being in the Atrenta family gave me real exposure to how a company of this magnitude evolves to a higher margin in the industry. Though Atrenta started here with a few engineers in November 2012, the number of graduate and skilled employees working at Atrenta has now grown to 50. This evidences how attractive the company was at hiring a highly skilled workforce in this short period. During my stay the company expanded its workplace by fabricating and partitioning another floor in the same building. The change of working place made work more enthusiastic.


New floor which was expanded

Everyone in the company assisted the interns wherever it was needed. We never experienced a gap between senior employees and interns; such was the culture they had evolved. The collaborative nature of discussions, conferences, meetings, etc. paved the way for building my professional career. The team, including officials from the local office and overseas, gave me the feeling of being part of an industry-level team. By the 2nd of May, 2014, we had successfully completed the 24 weeks of industrial training. The work I carried out in between is overviewed in the Industrial Training Report and the Training Presentation attached herewith.

Industrial Training Report

Training Presentation

Electrocorticography – From Bio-medical Instrumentation Perspective

Electrocorticography (ECoG) is an invasive medical technique pioneered by Wilder Penfield and Herbert Jasper in the early 1950s.[1] Even though it provides the same functionality as EEG, in practice the electrodes of this device are surgically attached to the brain to accurately record the signals from the cerebral cortex. As a result, neurosurgeons are more confident about the place where seizure activity occurs and about removing diseased tissue.[2] ECoG gives temporal and spatial resolutions of approximately 5 ms and 1 cm, which are much higher than those of EEG.[1]


Figure: Surgical Electrode Placement

Functions

ECoG electrodes

These electrodes consist of sixteen sterile, stainless steel, carbon tip, gold or platinum ball electrodes. The electrodes are connected to an overlying frame in a 'Crown' or 'Halo' configuration. The recommended spacing between electrodes is 1 cm; each electrode is 5 mm in diameter.[1]


 Figure: Electrode Configuration

Electrode placement

Placement is performed either by 'Craniotomy' or through 'Burr holes'.[2] A craniotomy is a surgical procedure where part of the skull is removed in order to expose the brain surface, whereas burr holes are small holes drilled into the skull in order to place the electrodes. Since injuries would pose a severe threat to the subject, both insertion and removal of the electrodes are performed in an operating room by a neurosurgeon. Electrodes are placed just below the dura mater (the outer cranial membrane), which allows flexible functioning without causing injuries due to normal brain movements.

Monitoring

Epilepsy, a disorder of recurrent seizures, is the condition primarily monitored using ECoG. The connected electrodes are plugged into EEG equipment to monitor seizure activity in the Intensive Care Unit. In a typical usage the subject is instructed to contract his arm, thereby creating an action potential in cortical pyramidal cells.[3] This is conducted through several layers, such as the cerebrospinal fluid, pia mater and arachnoid mater, to reach the electrodes.

As this proceeds, the recording is done on specific electrodes for the coincident neural activity. The plotted responses are in the high gamma band (66-144 Hz) and are colour-coded for convenient inspection. The aggregation of all the simultaneous signals (128) is also generated to get a sense of how the neural activity evolves.[3]

 

Clinical Usage

ECoG is primarily used in the following settings.[1]

  1. During pre-surgical planning, to localize the epileptogenic zones
  2. To map out the cortical functions
  3. To evaluate the success of epileptic surgical resection
  4. For research purposes


Figure: EEG and ECoG comparison

Even though ECoG gives greater flexibility in stimulating and recording signals before, during and after a surgery, it has its own limitations which must be dealt with, such as limited sampling time, a limited field of view and the influence of anesthetics, the surgery itself, etc. Epilepsy varies widely in etiology, clinical symptoms and the site of origin in the brain.[1] This emphasizes the importance of localizing the epileptogenic zone more precisely and accurately than would be possible using EEG. ECoG can be performed in either of the following forms.[1]

  1. In the operating room during surgery (intra-operative ECoG)
  2. Outside of surgery, for pre-surgical planning (extra-operative ECoG)

Intra-operative ECoG

This is particularly useful while a resection surgery is under way. During the surgery, it can be used to monitor the suspect tissue's epileptic activity and to ensure that the resection removes the suspect tissue completely. The objective of resection surgery is to remove the epileptogenic tissue which causes unacceptable neurological consequences. In identifying and localizing the epileptogenic regions, DCES (Direct Cortical Electrical Stimulation) comes into play, as it is a valuable tool for functional cortical mapping. It helps to localize the critical zones that must be left intact in order to preserve sensory processing, motor coordination and speech.


Figure: Before a resectioning surgery

Extra-operative ECoG

Before a patient undergoes a resection surgery, he/she should be identified as a possible candidate by demonstrating the presence of a structural lesion using MRI and EEG. Once the presence of a lesion is confirmed, ECoG is performed to identify the exact place and extent of the lesion and its surroundings. As described above, scalp EEG lacks the precision and accuracy to localize the region, and hence the ECoG data is assessed on ictal spike activity recorded during a seizure as well as on activity recorded between epileptic events.

Research applications

ECoG has recently been found useful in research applications, where it can be utilized as an accurate recording technique for use in Brain-Computer Interfaces (BCI). A BCI is a neural signal interface which can be used to drive prosthetic, electronic and communication devices using an individual's brain signals. Brain signals can be recorded invasively or non-invasively. Here ECoG serves as a hybrid, since it does not penetrate the blood-brain barrier like other invasive recording devices.

 

Safety Concerns

ECoG is basically considered an invasive instrument, as it directly interacts with the brain: the electrodes are typically placed on the dural surface of the brain after a surgery. Minor mistakes in the insertion and removal of the electrodes would cost severe damage to the subject's brain structures. Therefore it is highly important to keep the subjects as well as the instruments well attended, so as to produce the desired outcome of the procedure.

 

Risk Assessment

A maintenance strategy worksheet has been proposed by the University of Vermont Technical Services Program[4] to carry out a risk-based assessment of a medical device. This determines the frequency of inspection of the device. The table below shows how the rating of each criterion is made.

In the 'Clinical function' criterion, the device is evaluated for its invasiveness to the subject. ECoG is considered a device used to monitor the subject's brain activity directly, so this category scores 3. The 'Physical risk' criterion determines the risk associated with device failure; here ECoG scores 3, because a device failure would cause misdiagnosis or loss of monitoring. Errors made in electrode placement would cause severe damage to the brain, so someone could rate this criterion as 4; but for the time being the device is evaluated for functional failure and hence rated 3. 'Problem avoidance probability' evaluates the relationship of failure to the historical data of the device. Since ECoG depends on the precision of its data, the device is recommended to be tested at specific intervals; at each such inspection the results are recorded and compared with the actual results. Historically, common device failures are not very predictable, and therefore ECoG scores 2. 'Incident history' looks back at the record to find out whether incidents have happened due to device failure; this scores 1, whereas the 'Manufacturer requirements' criterion scores 2, as the device is regarded as sensitive and a specific inspection schedule is demanded. This gives a total score of 12, for which an annual (1x) inspection interval is recommended as the functional test frequency.

Risk Assessment for ECoG

TECHNICAL SPECIFICATION FOR ELECTROCORTICOGRAPHY

Electrocorticography-presentation

References

[1] http://en.wikipedia.org/wiki/Electrocorticography

[2] http://keck.usc.edu/en/Education/Academic_Department_and_Divisions/Department_of_Neurology/Patient_Services_and_Clinical_Programs/USC_Comprehensive_Epilepsy_Program/Resources/About_Procedures/Intracranial_EEG.aspx

[3] http://web.stanford.edu/group/nptl/cgi-bin/site/node/7

[4] Fluke Biomedical – University of Vermont Worksheet

[5] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3017949/

[6] http://jn.physiology.org/content/102/4/2563

[7] http://www.downloadplex.com/Scripts/Matlab/Development-Tools/Specifications-electrocorticography-intracranial-eeg-visualizer_499947.html

[8] http://journal.frontiersin.org/Journal/10.3389/fnins.2011.00005/full

[9] http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6107721

[10] http://www.schalklab.org/sites/default/files/misc/Brain-Computer%20Interfaces%20Using%20Electrocorticographic%20Signals.pdf

 

“If we look inside the atom, any atom, we will see a sun in its core.” – Ali ibn Abi Thalib (RadhiyALLAHu anhu)