Sunday, January 15, 2017

F1 score 2017.01.15

I went online and did a little googling about confusion matrix calculations. I took about 75 images and ran them through the program, and I made a chart with columns for true detections of a face, false-positive detections, and false-negative detections. From those counts I followed the formula for the F1 score, which is not strictly speaking a confusion matrix but is interesting to calculate. I came up with a score of 0.44. It's not that good.
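For reference, here is the calculation in Python. The counts below are placeholders rather than my actual tallies; the formula is the standard one built from precision and recall.

    # F1 score from detection counts.
    # These counts are placeholders, not the actual tallies from my chart.
    tp = 30.0  # true detections of a face
    fp = 40.0  # false-positive detections
    fn = 35.0  # false-negative detections (missed faces)

    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2.0 * precision * recall / (precision + recall)

    print("precision %.2f, recall %.2f, F1 %.2f" % (precision, recall, f1))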

Wednesday, January 11, 2017

Convolution visualization 2017.01.11

Here's a graphic that visualizes the weights of the first convolutional layer of one of the neural networks in the facial detection project I have been working on.
There are 32 boxes, with a 5x5 image of one filter in each box. This is after 5 or 6 runs of the training function.
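For anyone curious how a graphic like this might be produced, here is a rough matplotlib sketch. It assumes the first-layer weights have already been pulled out of the TensorFlow session into a numpy array of shape (5, 5, channels, 32); the function name and shapes are assumptions for illustration, not the code I actually used.

    import numpy as np
    import matplotlib.pyplot as plt

    def plot_first_layer_filters(weights):
        # `weights` is assumed to be a numpy array of shape
        # (5, 5, in_channels, 32), e.g. the result of sess.run()
        # on the first convolutional weight variable.
        num_filters = weights.shape[-1]
        cols = 8
        rows = int(np.ceil(num_filters / float(cols)))
        fig, axes = plt.subplots(rows, cols, figsize=(cols, rows))
        for i, ax in enumerate(axes.flat):
            ax.axis('off')
            if i < num_filters:
                # Average over input channels so each filter shows as one 5x5 image.
                img = weights[:, :, :, i].mean(axis=2)
                ax.imshow(img, cmap='gray', interpolation='nearest')
        plt.show()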

Thursday, January 5, 2017

awesome tag revisited in python 2017.01.05

Here I include some text from the README file in one of my GitHub projects. I previously scrapped the Java version of the project, but recently I started the project up again in Python, and now I like my results much more. The link to the project is here: https://github.com/radiodee1/awesome-tag

This is the README file, and it is followed by some screenshots.

awesome-tag

Facial detection and tagging experiments -- this project was broken for several months. It now does a better job of detection than it did before, but it is no faster: it still takes several minutes per image.
The method used for detection can be described as follows. All the code is written in Python. First a program divides the image into small boxes, 7 pixels to a side. Then a simple two-layer neural network is used to determine which of these boxes are areas of the image where a face might be; this network basically detects skin tones. Then an aggregating program draws together squares that are immediately adjacent. Finally a more sophisticated convolutional neural network is used to determine which of the resulting boxes is actually a face.
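As a rough outline, the pipeline might be sketched like this. The helper and model names here are hypothetical stand-ins; this is an illustration of the stages described above, not the actual code from the repository.

    BOX = 7  # pixels per side of the small scanning boxes

    def merge_adjacent(boxes):
        # Greedily merge (x, y, w, h) boxes whose edges touch or overlap.
        merged = list(boxes)
        changed = True
        while changed:
            changed = False
            out = []
            while merged:
                x, y, w, h = merged.pop()
                i = 0
                while i < len(merged):
                    x2, y2, w2, h2 = merged[i]
                    if x <= x2 + w2 and x2 <= x + w and y <= y2 + h2 and y2 <= y + h:
                        nx, ny = min(x, x2), min(y, y2)
                        w = max(x + w, x2 + w2) - nx
                        h = max(y + h, y2 + h2) - ny
                        x, y = nx, ny
                        merged.pop(i)
                        changed = True
                    else:
                        i += 1
                out.append((x, y, w, h))
            merged = out
        return merged

    def detect_faces(image, skin_net, face_cnn):
        # `image` is a numpy-style array; `skin_net` stands for the simple
        # two-layer network and `face_cnn` for the convolutional network.
        # Both are hypothetical callables that return a score from 0 to 1.
        h, w = image.shape[:2]

        # Stage 1: mark the 7x7 boxes the simple network scores as skin.
        candidates = []
        for y in range(0, h - BOX + 1, BOX):
            for x in range(0, w - BOX + 1, BOX):
                patch = image[y:y + BOX, x:x + BOX]
                if skin_net(patch) > 0.5:
                    candidates.append((x, y, BOX, BOX))

        # Stage 2: draw together boxes that are immediately adjacent.
        regions = merge_adjacent(candidates)

        # Stage 3: let the convolutional network decide which regions are faces.
        faces = []
        for (x, y, rw, rh) in regions:
            crop = image[y:y + rh, x:x + rw]
            if face_cnn(crop) > 0.5:
                faces.append((x, y, rw, rh))
        return faces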
TensorFlow is used to implement the neural networks, running on the GPU of the author's computer. Importing the TensorFlow library and the Gtk library in the same Python process caused a conflict, so a separate program was created for the TensorFlow functions, and the GUI calls it using the 'subprocess' module.
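From the GUI side the workaround looks roughly like the following. The script name and the JSON convention here are made up for illustration; the point is only that the TensorFlow work happens in its own Python process.

    import json
    import subprocess

    def run_detector(image_path):
        # 'tf_detect.py' is a hypothetical stand-in for the script that
        # imports TensorFlow; this GUI process only imports Gtk.
        output = subprocess.check_output(['python', 'tf_detect.py', image_path])
        # The child process is assumed to print its detected boxes as JSON.
        return json.loads(output.decode('utf-8'))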
When the experiment was first tried, Java was used. This worked poorly and good results were never fully realized. The Java code is included in the GitHub repository for completeness. Also included in the repository is some Python code that uses TensorFlow to work on the MNIST dataset, just as is suggested by the TensorFlow 'getting started' page.
The training data that I use came from www.nist.gov. They have a dataset of labeled facial images for a facial recognition challenge (note, this is for facial recognition, not really detection). The dataset is called the 'IJB-A' dataset. The download is large and you must apply for permission to access the download page. I believe most students of computer science would be approved, if you are clear that you are interested in research.
You are required to add the following text to any publication: This product contains or makes use of the following data made available by the Intelligence Advanced Research Projects Activity (IARPA): IARPA Janus Benchmark A (IJB-A) data detailed at http://www.nist.gov/itl/iad/ig/facechallenges.cfm .
The first step to using this project is downloading the dataset. Then the working environment must be set up. The project uses, among other things, a dot-folder in your home directory called '.atag' for storing the location of the IJB-A database as well as the location of several of its own files. Then you must train the two neural networks on the images in the dataset folders. Then you can go ahead and test the facial detector on the images in the database or on any image of your own.
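As an illustration of what the '.atag' folder does, the bookkeeping can be as simple as reading and writing a small text file of paths. The filename 'settings.txt' below is hypothetical; the real folder stores a few files of its own.

    import os

    ATAG_DIR = os.path.expanduser('~/.atag')
    SETTINGS = os.path.join(ATAG_DIR, 'settings.txt')  # hypothetical filename

    def save_dataset_path(path):
        # Remember where the IJB-A dataset lives.
        if not os.path.isdir(ATAG_DIR):
            os.makedirs(ATAG_DIR)
        with open(SETTINGS, 'w') as f:
            f.write(path + '\n')

    def load_dataset_path():
        with open(SETTINGS) as f:
            return f.readline().strip()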
---


In the shot above, the red box designates 'detection'. Here the computer has trouble with the cork board in the background.

Here I use one of the stock photos from the downloaded database of photos. Several faces are detected, and one or two are not.

Saturday, November 12, 2016

Finding a Path 2016.11.12

I haven't blogged here in a long time. Here's some info about a project I've been working on.

Recently I've been toying with Google's TensorFlow Python library. I've written an op for the CPU and the GPU. The op, of course, is a modified version of Dijkstra's algorithm. The funny thing is that the CPU version runs faster than the GPU version. There seems to be a long overhead time, something like 7 seconds, before a GPU op begins to work. There is also a size restriction. My op works on grids. With the CPU version I can use a large grid; I've tried 480x480. On the GPU the op fails if the grid is larger than 70x70. What do I come away with? Well, I enjoyed doing the GPU version, but the CPU version ends up being superior.
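For readers who haven't used custom ops, loading a compiled op into Python looks roughly like this. The shared-library name, op name, and grid encoding are placeholders rather than the actual names from my repository; tf.load_op_library itself is the standard TensorFlow call.

    import tensorflow as tf

    # Hypothetical file and op names; the real ones live in the awesome-tf repo.
    path_module = tf.load_op_library('./path_op.so')

    # A small grid of cells; here 1 might mark an open cell and 0 a wall.
    grid = tf.constant([[1, 1, 1],
                        [0, 0, 1],
                        [1, 1, 1]], dtype=tf.int32)

    with tf.Session() as sess:
        # The op is assumed to take the grid and return some path description.
        print(sess.run(path_module.my_path_op(grid)))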


So, the code is here: https://github.com/radiodee1/awesome-tf

I also use pygame to construct my GUI for testing with real PNG images.
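The testing GUI follows the usual pygame pattern: load a PNG, blit it to the screen, and draw a rectangle over whatever the op reports. A minimal sketch, with a made-up filename and a fixed rectangle standing in for real output:

    import pygame

    pygame.init()
    image = pygame.image.load('test.png')  # any PNG you want to test with
    screen = pygame.display.set_mode(image.get_size())

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        screen.blit(image, (0, 0))
        # Draw a red rectangle where the op reported something interesting.
        pygame.draw.rect(screen, (255, 0, 0), pygame.Rect(20, 20, 60, 60), 2)
        pygame.display.flip()

    pygame.quit()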

Thursday, July 28, 2016

awesome-audio-cnn 'brain' file

In the last post I mentioned that I was making public a GitHub project that I've been working on. The project is called 'awesome-audio-cnn' and it uses a neural network to select music for automatic playback on a simple server. The project's 'INSTRUCTIONS.md' file suggests that you train your own neural network file for the server. On reflection I thought this was too difficult.

I have included in the git repository (mentioned in my last post) a file for operating the neural network that is at the heart of the awesome-audio-cnn project.

The file is 23.9M in size, so it may take a while to download, but it means you don't need to train the neural network yourself, which gets you a faster start. I should note that the file was trained here, on my own audio collection, so it may not behave as you expect on your files. There is no provision, incidentally, for comparing songs on the basis of tempo, so the songs this neural net selects will not necessarily match each other in tempo.

The inclusion of this file may mean that some users with a slow internet connection or limited disk space cannot use this project.

That said, someone with time on their hands might enjoy setting up this kind of server. The info in the file is provided on an 'as is' basis, without warranties or conditions of any kind.

The file is called 'fp-test.bin' and it is located in a folder called 'acnn-brain', for lack of a better title.


Instructions for awesome-audio-cnn

I recently made public a GitHub repository I was working on called 'awesome-audio-cnn'. The code for the project is on GitHub at https://github.com/radiodee1/awesome-audio-cnn . Below are the instructions that go with the project.

INSTRUCTIONS

This is a complex project. There are several steps required for a full implementation, and some of them require resources not available to everyone. The ultimate goal is a music server that employs a neural network to help it select which songs to play. It might be possible to implement only part of the project, giving the user a server that plays selected albums in their entirety but without the neural network; that option is not explored in this document. Finally, this project presupposes that the user is serving up their own music in their own house. The project is not for any sort of distribution of music, and is scaled to operate with a small library of personal selections.
  1. Arrange your music in the predetermined format. Music for this project must be in the mp3 format and must be arranged in the following manner. All songs are sorted by album and are stored in folders with the artist and album title as the folder's name. These album folders are all stored in a single larger folder, usually called 'Music' or 'music'. There is generally a jpg image in each folder for the album's cover image. This file is called 'cover.jpg'. Song files that are simply dropped in the 'Music' folder without an album directory will not be recognized. Furthermore all mp3 songs should be tagged using the id3 tag system. Good tools for using mp3 files are: picard, soundjuicer, and soundconverter.
  2. Download the sources for the server project. The project is currently hosted on github and is called awesome-audio-cnn. Make a folder in your work area (or 'workspace' in eclipse terminology) for your neural network training data. You will use this folder later. For example, mine is at ~/workspace/ACNN/.
  3. Install the necessary supporting software. This may include, but may not be limited to, java8, libchromaprint-tools, tomcat8, activemq, and maven. For an IDE the developer used the IntelliJ IDEA Community Edition. The IDE is used for syntax verification, while the mvn command line tools are actually used for building the project.
  4. Build the server and the desktop versions of the project. You can use the command ./make_music_for_war.sh /path/to/Music to set up the xml files in the project repository. Alternatively you can mount the music directory at the location /mnt/acnn/Music/ and leave the shell script unused; this mounting can be accomplished by modifying your fstab file. Use the command ./get_all.sh to see if your build environment is up to date. If the second script works you will have two files in your awesome-audio-cnn folder: the war file, called audio.war, and the jar file, called acnn-desktop.jar.
  5. Use the desktop version to set up the working environment for training the neural network. You should start the desktop version with the command java -jar acnn-desktop.jar -train. The various pieces of information are stored in the user's home folder in a folder named .acnn. You need several thousand songs for this training to go well. Identify the location of the music folder to the desktop version of the software. You must also identify the folder that will hold the training data that you work with. On my computer this folder is called ~/workspace/ACNN/. It could be called anything and could be located anywhere in the user's home area. If at any time you want to start over with the training, one of the things you should do is erase the contents of the folder ~/.acnn.
  6. Use the desktop version to create the training 'csv' file. Originally the file is called 'myfile_1.csv' but you can change the name by clicking buttons on the desktop user interface. After you have set up all the directories and the filenames you should click the Make List button. This makes the csv list of your songs that is used in neural network training.
  7. Begin training your neural network. Training is an iterative process and requires the /usr/bin/fpcalc program installed with libchromaprint-tools. You press the buttons on the interface in a certain order and you watch the terminal that you started the interface in. Basically you press the Train button and watch the screen. When a certain amount of time has passed you press the Clear-Break button. Wait for the neural network model software to save the model. Then press the Test button to evaluate your progress. Wait while the testing software goes through the test set. At the end of the test phase you end up with a score. The score starts out at something like 0.5 but will improve with extensive periods of training. Repeat this process (#7) until you get a testing score between 0.85 and 0.95. I stopped at a score of 0.94. Two files are created in this process. They are named (originally) fp-test.bin and fp-test.updater.bin. The base of the name ('fp-test') can be changed to anything you like during training, but must be 'fp-test' when the web site is launched.
  8. Prepare to deploy the war file. This is one of the areas where the requirements of the project are very specific. The audio.war file is meant to reside on a tomcat8 server that is connected by a dedicated IP address to a wifi router; the server is connected by Ethernet (Cat) cable. This way anyone in the area of the router can access the server via the IP address and play the music stored there. The server is meant for private use. The war file expects to find your two neural network files in the folder /opt/acnn/. Here it will set up another folder, /opt/acnn/.acnn/, which is in most ways identical to the one that the desktop jar file creates in your home folder. The /opt/acnn/ folder should contain the two neural network '.bin' files and also the file myfile.id3tag.csv. If this csv file is not present the program will try to create it. Maintaining this file is covered in the next step.
  9. Whenever you change the contents of your music folder, and when you first set up your server, you must supply the audio.war file with a new copy of the myfile.id3tag.csv file. Start the desktop version of the program with the command java -jar acnn-desktop.jar -id3. After a few minutes the program will exit, leaving a new copy of the file in the training data folder. On my computer this would be in the ~/workspace/ACNN/ folder. Copy this file to the server computer and put it (with permissions that allow it to be read universally) in the /opt/acnn/ folder. There should be one entry in this csv file for every mp3 file that the server has access to.
  10. Go through the README.md document and make sure that you have edited all the tomcat and activemq configuration files to allow the server to do its job. For activemq type the following.
    $ cd /etc/activemq/instances-enabled
    $ sudo ln -s ../instances-available/main .
    For the tomcat8 server you must specify memory sizes for startup of the Catalina engine. Add the following line to the beginning of the catalina.sh file at the location /usr/share/tomcat8/bin/.
    export CATALINA_OPTS="$CATALINA_OPTS -Xms1024m -Xmx4g"
    For the tomcat8 admin interface, change the following to allow larger files to be uploaded through the web manager. The file /usr/share/tomcat8-admin/manager/WEB-INF/web.xml needs to have the following code:
    <multipart-config>
    <!-- 52MB max -->
    <max-file-size>52428800</max-file-size>
    <max-request-size>52428800</max-request-size>
    <file-size-threshold>0</file-size-threshold>
    </multipart-config>
    I added a zero to both 'max' numbers mentioned above.
  11. Set up the mysql database. To do this you should have root privileges. Type mysql -u root -p. When prompted enter the root password. Then type this:
    mysql> create database acnn;
    mysql> grant all on acnn.* to 'testuser'@'localhost' identified by 'fpstuff';
  12. Stop the tomcat8 web server on the server computer. This can be achieved by typing sudo /etc/init.d/tomcat8 stop. Copy the audio.war file to the directory /var/lib/tomcat8/webapps/. Restart the server with the command sudo /etc/init.d/tomcat8 restart. Navigate to the server with your favorite browser on your favorite device and listen to your music.

Monday, May 2, 2016

Awesome Flyer 1.0.0.20160430

I returned to Awesome-Flyer to update the game. I added some stuff that connects the game to Google Leaderboards, and it seems to work well. I also corrected some screen size issues that were present if you played the game in portrait mode.

It all went reasonably well. The biggest part of the whole thing was switching from Eclipse, which I used when I started the game, to Android Studio, which is the current standard. This was all before I even started to address what was wrong with the game!

There are few, if any, real users of this game. Maybe I can change that in the future.

If you were a user of the game and you update to the new version, you will have to clear the game's local cache for the new version to work. Otherwise the game will crash when you launch it.

This is because the old version of the game saved the score as an 'int'. The new version uses a 'long', and since the score is saved in the system preferences (the application cache), that stored data needs to be cleared out before the new preferences can be saved over the old ones.

My only fear now is that the 'jni' library for the game may not work on all SDK levels. I have tried it on SDK 22 and 23, and some others. I do not have many devices to test with, though, and I don't have money to buy test devices. Hopefully the native library (the 'jni') just works. If not, then only a small number of people will be able to use the game.