JS Ext

Monday, December 31, 2012

Feature request and update for Mele Fly Air Mouse

I have been using the Mele Fly Sky Air Mouse for about two months now.  So far, it is great.  I have only needed to recharge the mice twice each (I have two of them).  When the mouse gets low on charge, it starts to jump around but is mostly usable.  There are times when a key on the keyboard would not work, but I think that may have been related to a low charge (I don't use the keyboard frequently).  I love having the volume buttons on the remote.  Not all of the buttons map to Android keys, however.  I have run a test, so I have all the key codes for the remote side of the device.  I will post them at some point.  The device packs light, so I was able to bring it with me when I brought my MK802 to a family member's house.  I have had a few people try it out.  Most struggled with it for a few minutes, then got the hang of it.  Most people struggle with the fact that it is not a Wiimote, even though it acts like one.  Before my cousin tried it out, I specifically told him he didn't have to point it at the screen.  He still pointed it at the screen.  He caught himself doing it three times.

The sensitivity in a device like this will always be a battle.  The only time I find it difficult to use is when I need to pinpoint a small button on the screen.  It can be difficult to hit the bullseye.  You can get very close, but moving it that few pixels (yes, you saw that right, pixels on a 1080p TV) to get from thumbs down to fullscreen in the Youtube app can be problematic.  I blame this more on the app than the mouse; I have blogged about those apps before.

On to the feature requests.  I have ordered them from the nice-to-haves to the most essential changes.



1) The Home button doesn't take you to the home screen

I use Android with this mouse.  I understand the fact that this mouse was designed as a generic USB HID device, but it would be nice if the Home button acted like the Android Home button.  This isn't essential for me, because I tend to stay in one app 90% of the time, but it would help.  On a related note, the Mute button doesn't actually mute.  It does send a keyboard event, so Android apps can intercept the Mute button.

2) Mouse wheel

Scrolling up and down can be a pain.  Using the drag gesture doesn't work well with large lists on this device.  Having a mouse wheel would help drastically.  I think one would fit on the left side of the remote, just to the left of the DPAD.  It would stick out of the side of the device.  You could then use your left hand or your right index finger to scroll the wheel.


3) Swap Enter and Left Mouse Click.

The Enter button is above and to the left of the DPAD.  The left mouse click is inside of the DPAD.  The natural flow when you are using the DPAD is to hit the button inside of the DPAD.  What you will find is that you end up selecting something totally different, because the mouse is over one button while the highlight from the DPAD is over another.  This is unnatural.  My wife and I have gotten used to it, but it needs to be swapped.

4) Add media buttons

I absolutely need the ability to pause, play and skip on the remote.  Right now, I have to move the mouse to the on-screen pause button.  This is slow and requires effort to hit the button.  If I could request one feature for the next version of the mouse, this would be it.

Friday, December 28, 2012

Pandigital One-Touch Scanner (PANSCN04)

I have been searching for a fast way to scan a large collection of photos.  While in Microcenter purchasing Christmas gifts, my wife noticed the Pandigital Photolink One-touch Scanner.  It was only $30.  At that price, I was very skeptical, since the $100 Xerox didn't work at all.  For $30, though, it was cheap enough to take a risk.

When reviewing various scanners, you tend to get a feel for where the problems occur.  For sheet-feeding scanners like this one, a real fear is blurriness caused by the picture rotating.  This occurs when you don't feed the picture in straight and you adjust it while it's scanning.  The next fear has to do with scanning directly to an SD card.  If you scan to an SD card, you never have an option to review the pictures.  This is also what makes scanning with these devices faster.  After you scan one picture, you are ready to scan the next.  You don't have to worry about scanning areas, cropping or slight rotation.  The device should take care of that for you.

I decided to scan about 5 pictures, then check the image quality on a computer.  I was very happy with the results.  The scanner only scans at 600dpi, so don't expect super quality scans.  The guide that you put the picture against worked well, so there were no slight rotation problems.  It chose the correct scanning area.  My litmus test succeeded.

Years ago, my future father-in-law gave my then girlfriend a bag full of pictures.  I think it was a compromise on him buying her a scanner; she was supposed to scan all the pictures.  She never did.  Eventually, she saw the pain I went through to scan all my high school pictures using a flatbed scanner.  It took me weeks, and a lot of the pictures were slightly rotated.  This discouraged her even further.  After my litmus test, I decided to scan her father's pictures.  I expected to spend an hour here and an hour there for a while to get them scanned.  There were 8 envelopes.  Some were single rolls while others were multiple rolls.  It took me about an hour and a half to scan ALL the pictures.  At that point, the $30 had already paid for itself.  After scanning a picture, you only have to wait about a second to scan the next picture.  When I first started scanning, I tried to feed the next picture too fast.  The orange light blinked red to tell me it wasn't ready yet.  I found that moving the just-scanned picture to a face-down pile gives the scan enough time to complete.  When I say a second, I really do mean a second.  Once you get into the groove of scan, put into pile, scan, put into pile, you will find that you make progress really fast.  I am planning on flying to a family member's house in the coming months.  I will be bringing the scanner.  The goal is to scan as many pictures as I can; hopefully the entire collection (well, the ones less than 5 inches wide at least).

There are some limitations to the scanner.  The one I got only accepts pictures that are 5 inches wide, max.  The box advertises supporting a max of 5x7, but it will support longer than 7 inches.  I think it just advertises that to make it easier for the end user to understand.  I had a 5 inch tall panoramic that scanned just fine.  I don't know the actual max length.  It's better for advertising to undersell than oversell.  600dpi is good enough quality to look at.  You can put the pictures on a big screen TV if you have family members over to look at a bunch of them.  The device was designed so you can scan directly to the SD card that a digital picture frame uses.  I understand that a lot of people want higher quality.  At times, I want higher quality too.  This scanner is great for scanning your entire collection (that is less than 5 inches wide), so that you can more easily pick which pictures you want scanned at a higher quality.  When someone asks for higher quality, you can find the picture and scan it with your flatbed.

If you decide to purchase this product, I recommend that you do some price comparisons.  Normally I post the Amazon link directly to the product because it is the cheapest price.  In this case, I'm posting the Amazon link to show you what it looks like, and so you can read more reviews.  I purchased it from an actual store for less than half of the Amazon price.  I did not find it on their online site, though.  I tried doing a Google search and the price ranged from $30 on Ebay to $100 on Overstock.  Amazon was halfway in between those prices.  The exact model I got was the PANSCN04.

I read through the reviews of the scanner on Amazon.  Most are positive.  I did not notice the grey line problem that a few reviews reported.  I did not have to calibrate the device or use the protection sleeve, so I don't know if that affects the scan.  Below is a Tiger Direct review of an older model that only supports 4 inches as opposed to 5 inches, but they are very close.



Thursday, December 27, 2012

Xerox travel scanner

My grandmother had collected a lot of pictures over the years.  I wanted to scan the collection, but there were a lot of pictures.  I only have a flatbed scanner, which takes a long time to scan pictures.  I decided to find a feeding scanner so that I could spend as little time as possible scanning the pictures.  I settled on the Xerox Xtravel 600 dpi travel scanner.  The reviews were mixed, but some people speculated that the negative reviewers weren't reading the manual.  They stressed that you must read the manual before doing anything.

I purchased it thinking that if I read the manual, it should be fine.  I started having issues immediately.  The Windows XP machine didn't like the scanner.  I tried to use my Linux laptop without success.  The scanner would scan, but all the images would be really dark.  I couldn't get it to calibrate.  Finally, I tried a Windows 7 machine, and I had the same issues that I had with Linux.  I had similar issues to some of the reviews, and I did follow the instructions.  This leads me to believe that the product is just crap.

I purchased the scanner in July of 2012 for $100.  I just checked today, and it is selling for around $150.  Amazing that the price went up for a product that doesn't work.


Wednesday, December 26, 2012

MP4 Conversion Issues

I have been converting all of my AVI files to MP4 files.  For the most part, everything has been going well, but I have been running into issues from time to time.  I see sporadic errors in the ffmpeg output about not being able to render a frame.  When I play that movie using the MK802's bundled HD player, I run into an issue.  Partway through the video, I hear a pop, the video freezes for a fraction of a second, then the audio sync is off by about half a second.  The pop does not cause the file itself to have audio sync issues.  If you rewind then immediately fast forward, you get right back to where you were and the audio sync is fine.  This tells me there is an artifact at that point in the file that causes the video player to mess up.  I haven't tried playing the MP4 files in MX Player or Mplayer (on Linux) yet, so I don't know if it occurs in all players or just the bundled HD player.

Monday, December 24, 2012

Converting old videos, Part 2

In a previous post, I talked about converting 8MM video to AVI files burned onto CDs.  Many people talk about how CDs and DVDs degrade and eventually lose data.  This Christmas, I decided to copy all the AVI files off of the CDs.  Luckily, none of the files were corrupted.  I was able to convert all 22 files to MP4 files encoded in H264.  I used FFmpeg, which used x264.  I encoded at 500kbps.  Given the 320x240 resolution and the 15 fps, I could have probably gone lower on the bitrate, but disk space isn't much of an issue anymore.  I converted the video from 3.8GB on 7 CDs to 2.3GB on a single Micro SD card.  Below are the commands I ran to convert (a two-pass encode).  This is derived from research a buddy of mine did to get MP4 files that work on older iPhones.


/usr/bin/ffmpeg -loglevel warning -i "./1958 January to Summer.avi" -threads 2 -r 15 -vb 500k -vcodec libx264  -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -me_method umh -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -g 300 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71  -maxrate 10M -bufsize 10M  -rc_eq "blurCplx^(1-qComp)" -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -pass 1 -y "./1958 January to Summer.mp4"
/usr/bin/ffmpeg -loglevel warning -i "./1958 January to Summer.avi" -threads 2 -r 15 -vb 500k -vcodec libx264  -flags +loop -cmp +chroma -partitions +parti4x4+partp8x8+partb8x8 -me_method umh -subq 5 -trellis 1 -refs 1 -coder 0 -me_range 16 -g 300 -keyint_min 25 -sc_threshold 40 -i_qfactor 0.71  -maxrate 10M -bufsize 10M  -rc_eq "blurCplx^(1-qComp)" -qcomp 0.6 -qmin 10 -qmax 51 -qdiff 4 -level 30 -pass 2 -y "./1958 January to Summer.mp4"

Friday, December 21, 2012

Converting old videos, Part 1

Between 1958 and 1974, my grandfather made home movies with an 8MM camera.  For Christmas 1996, my dad and I decided to convert those old 8MM films to a digital format to give to each family member.  Our plan was to burn the videos onto CD-Rs to give to the family.  Although the DVD specification was created a year before, DVD-Video didn't come to the US until 1997.  We were a bit ahead of our time.

Our setup was to tape a white piece of paper to a wall.  We set up the 8MM projector to project onto the paper.  We pointed our video camera (one of the big ones that you put a full VHS tape into) at the paper.  Our computer had a Hauppauge video capture card hooked up to the camera using an RCA cable.  The computer had a state-of-the-art Pentium 150MHz processor in it.

Although it was a lot of horsepower for the day, real-time encoding of video was a problematic task.  I did a lot of test recordings using various software and codecs to find the right mix that would allow us to encode in real time.  We settled on R Bartick's RBCap using Intel Indeo 5.  We encoded 320x240 video at 15 fps using 1101.0 kbps in the yuv410p colorspace.

We encoded 22 movies into 3.8GB of data, split across 7 CD-Rs.  This was a time when blank CD-Rs were $1.50 and came with jewel cases.  We got a discount for buying 100 blanks for $100.  We had to make 7 sets, so it was worth it.  At this time, CD-R drives were 1x and were very sensitive.  The screensaver turning on was enough of a CPU spike to disrupt the burning process and turn the CD into a coaster.

This was an amazing feat for us.

Thursday, December 20, 2012

MK802: Video performance and Youtube issues

In a lot of previous posts, I have talked about having video performance problems and Youtube connection issues.  There was some evidence suggesting that both were related to overall network issues on my home network.  It seemed like some video formats could not use the hardware decoder if there were streaming issues.  Also, Youtube seemed to buffer and hang a lot, in both HD and SD modes.

While traveling for the holidays, I brought my MK802 with me.  In a previous post, I talked about how my family really enjoyed the technology.  This trip gave me an opportunity to use the device on another network to see which issues occurred or disappeared.

As far as video performance goes, AVI/XVID files on the SD card were still not using the hardware decoder in MX Player.  That didn't seem to be an issue, though.  The software decoder was able to handle the load.  MX Player still could not play HD content, though.  I had to use the bundled HD Player.  This made it a bit awkward, because depending on the format I was about to play, I had to switch between video players.  This caused people to be a little confused, but as long as I was the one controlling the device, everyone was fine with it.  All local videos did play fine, however.

Next, I fired up the Youtube app.  I tried playing an HD video, but it wouldn't play.  The buffering progress bar wasn't moving.  I switched it to SD and the buffering bar started to move, but it still took 30 seconds before the video started.  This was worse when a commercial would play, because it took 30 seconds for the commercial to buffer, then another 30 seconds of silence for the video to buffer.  This could be an issue with the WiFi at the house, though.  The WiFi signal was kind of low there.  I fired up Wifi Analyzer on my phone and discovered that 4 WiFi networks were sharing channel 6.  There was a 5th network on channel 7, but none above that.  There were 3 networks below channel 5, but their signal strength made it seem like they were farther away.  The local WiFi was strong enough that I could stream Pandora with no delay, however.

This leads me to believe there might be something wrong with the networking on the MK802.  My buddy has an MK802 and he doesn't seem to have the problems I do, but he hasn't done as much with the device as I have.  I'm waiting for the opportunity to test it on another network.

Wednesday, December 19, 2012

MK802 on the road

I went over to a family member's house for a gathering.  While there, I talked about my MK802 and how interesting the technology is.  Two family members immediately expressed interest in buying their own devices (especially when they found out it was less than $100), but I warned them that it is the bleeding edge of technology right now and that I am an early adopter of this technology.

I decided that I would bring my MK802 with me the next day.  I loaded up a Micro SD card with home videos and old-time Christmas specials.  When I got there, I hooked up the HDMI cable and the Android screen came up.  I started playing some of the home videos on the device just to demo them.  After getting the WiFi password, I fired up the Pandora app to play Christmas music.  After more family members arrived, I started playing the home videos.  We went back and forth between music and videos for a while, then ended with the Youtube app.  My family is very science-oriented, so I wanted to show some of the latest science videos.

The device was a huge hit.  The flexibility to move from one function to another was great.  At one point, we paused a video to play music.  A family member expressed concern over being able to continue watching it.  MX Player remembers where you were when you last played a video, so we were able to continue right where we left off.  Everyone was very impressed.  For an early adopter like me, it was well worth it.  The only thing that would have made it better would have been Skype video calling support for the family members that couldn't be there.

Tuesday, December 18, 2012

Super DVD Security

I pay for Netflix.  I get the streaming service as well as one DVD in the mail.  My wife wanted to watch Disney's Brave.  That isn't an unreasonable request.  I put Brave on the DVD queue and I got the DVD in the mail.  Netflix did their part.  Next is our part.  We inserted the DVD into our computer to play the movie in our living room.  The DVD didn't work.  Now, the computer is Linux, and DRM has always messed with Linux.  I understand...the media industry doesn't support Linux.  I tried passing the entire DVD drive through to my Windows XP virtual machine.  Neither Windows Media Player nor VLC could play the DVD.  Ok, maybe it's the whole "VM" thing.  I can't expect every piece of software to run in such a weird setup.  Since my wife is the one that wanted to watch it, I told her she should just watch it on her work laptop.  She has a new Dell laptop with Windows 7 Professional on it.  She popped it in and Windows Media Player gave her the same error that Windows XP gave me: cannot turn on analog copy protection.  Disney made a DVD so secure that it cannot even be watched!  I have decided that every time Netflix sends me a DVD that won't play on one of my computers, I will notify them that the DVD is damaged.  In the meantime, here is Wil Wheaton talking about DRM.  Enjoy!




Monday, December 17, 2012

IOGear USB HDMI Adapter

My VM server had an AMD HD 6570 that was passed through to Windows XP.  That worked for a while, but I started to have issues.  The card started to display horizontal lines and it wasn't fast enough to bump up the quality in some games.  I decided to upgrade to an AMD HD 7870 (GHz edition).  I figured I could use the new card for Windows XP, where I do my gaming, and the old card for the Windows 7 VM.  I use the Windows 7 VM for upgrading our phones and other things that mess around with drivers.  There was not enough physical room in the computer for the 3rd video card, however.  That is when I decided to buy the IOGear USB HDMI Adapter.  The Windows 7 VM has a PCI Express ASMedia ASM1042 USB3 controller passed through to it.

I plugged in the adapter and Windows 7 recognized it.  It listed it as 3 separate but related USB devices.  It showed green checks for two and a red X for one.  For a fancy device like this, I can't expect Windows 7 to have the drivers already.  I popped in the driver CD and installed the software.  After a reboot, Windows 7 had three green checks.  I still wasn't able to use it, though.  The Display->Screen Resolution screen did not show a second screen.  After a reboot, I got a popup telling me that there was a driver update for the HDMI Adapter.  I clicked OK to update and had to reboot again.  After that reboot, I got a popup telling me the drivers were not supported on my computer and that I should update the drivers.  The device still doesn't work.

Running inside of a VM causes issues.  If you can get PCI passthrough working for a USB controller, then most USB devices will work in the VM.  Some software tends to have issues inside of a VM as well.  This means more obscure USB hardware has two things working against it inside of a VM: 1) the USB device itself might not like being passed through to a VM, and 2) the required software for the device might not work correctly inside of a VM.  I don't know if one of these two scenarios is what is causing the device to not work.  I could try it on another computer, but I have very few real Windows computers.  I tend to uninstall Windows on any computers I buy and use their licenses in the VMs.

Friday, December 14, 2012

Current state of N64 Emulators

I have been using Mupen64plus for years to emulate the N64.  It is the best emulator for Linux.  The user interface left much to be desired and I found parts of the experience frustrating.  It also didn't emulate all games properly.  The project is still active and trying to improve itself.

After making a Windows VM with VGA Passthrough, I decided to give the Windows emulators a try.  I had purchased USB-to-N64 adapters to let me use my original N64 controllers.  These devices worked better in Windows.  Mupen64plus worked better but still had issues in Windows.  I decided to give Project64 a try.

Project64 is a free Windows-only N64 emulator.  The interface is much more user friendly and it supports far more games.  As a whole, the product is better than Mupen64plus.  In 2005, the development team decided to rewrite the core emulation engine.  The 1.7 release has been in development since then.  You cannot download the 1.7 version, however.  You must donate money to download it.  If it were 2006, I might consider donating money to promote the development of the next version.  It's not, though.  It is almost 2013.  The main website gives no indication that 1.7 will be delivered faster or better if I donated.  The last update is a year old.  The website still states that, as of 2008, they hope to deliver the 1.7 release by 2010.

Although it is classified as an open source project, the recent source isn't available.  If it were, there would be an opportunity to fork the project if anyone were interested.  Mupen64plus is actually a fork of Mupen64 that added 64-bit recompilation support.  I couldn't find the source for a version of Project64 later than 1.4.  You have to dig into the forums to learn that the community is actually still active.  In August of this year, a Google Code site was set up that posts actual source code with issue lists.

At this point, 1.7 is still listed as a beta.  There is no information on the website that tells me what the current advantages are of 1.7 over 1.6.  Is it faster?  Does it support more games?  Is it more accurate?  Does it support network play?  These are the indicators that would tell me to donate money so that I can try out 1.7, and contribute to the community.  From my point of view, Project64 1.6 is still the best N64 emulator available.  I am looking forward to the 1.7 release, but I am not very optimistic.

Thursday, December 13, 2012

Smart TV vs Dumb TV

I see a lot of coverage for "Smart TVs".  These TVs have built-in support for streaming media from Youtube, Pandora, Netflix and Hulu Plus.  Although I do pay for Pandora and Netflix, I realize that those companies might not last forever.  What happens to your TV if Netflix goes out of business?  What happens if a new service comes out that is better than Pandora?  If you are lucky, you can update your TV's software.  If you are not, then you have to buy a new TV.

In the programming world, this is called "tight coupling".  In system administration, we sometimes call it "direct integration".  In programming, this is considered bad, while in the sysadmin world, it depends on who you talk to.  There should be an abstraction layer between your content provider and your content player.  Imagine if you had to buy a new TV when you switched from Comcast to Verizon.  Instead, there is a protocol between the TV and the provider.  That protocol has changed over time, but the current one is HDMI.

If you want "Smart" features on your TV, I highly recommend buying a device that will provide those features to you.  If you want to upgrade your TV, you can do that.  If a new service comes out that isn't supported on your device, you can buy a different one that does support it.  If the device dies, you can buy a new one.  If the TV dies, you can buy a new one.

As for which device to get, that depends.  The Apple TV and Google TV are the newcomers that are promising, but not fully developed.  The Roku is getting really cheap and supports many services.  It is not worth spending the extra money to get a "Smart" TV when you can buy a device that turns a "Dumb" TV into a "Smart" TV for less than the price premium.

Wednesday, December 12, 2012

Running loops

I am one of those people that loves the command line.  I like having the ability to write a script that allows me to execute something in a loop.  Some programs are not very loop friendly.  Here are some examples:

SSH

As a system administrator, you find yourself needing to execute a command on a bunch of servers.  If you start out with a simple for loop with an ssh inside, you notice something weird: only the first iteration executes.  It took me a while to figure out how to overcome this problem.  The ssh program is a special program.  It does special things with stdin to transfer control of it to the remote server.  The problem is that it consumes all of stdin inside of the loop, which means it consumes the rest of the loop's input.  I don't fully understand what is going on under the hood, but the loop uses stdin, and after ssh consumes stdin, there are no more entries to loop over.  The simple solution is to echo into ssh.  This gives ssh its own stdin to consume.

# Loop over hosts
for x in server1 server2
do
  # Give ssh its own stdin so it does not consume the loop's input
  echo | ssh "$x" hostname
done

Mplayer

Recently, I have been trying to convert my AVI files to MP4 files.  I have been matching bitrates between the two containers/codecs.  I usually use mplayer's -identify flag to detect the bitrate of the source file.  That worked when called once, but inside of a loop, it caused problems.  Mplayer was corrupting stdin.  I tried the echo trick, but that didn't work.  Apparently, when you echo into mplayer, it acts differently.  Instead of identifying the input file, it tries to identify the input stream on stdin.  When you are just echo'ing into mplayer, it just tells you that it got an EOF.  I couldn't get past this problem directly.  To work around it, I ran ffmpeg against the input file without giving an output file.  The program errors out, but it prints the bitrate of the old file in the error message.  This is not documented, repeatable behavior, so I know this script won't stand the test of time.

FFMpeg

I am using ffmpeg to convert from AVI to MP4.  An interesting thing happened when executing ffmpeg (with an output specified) in a loop.  FFmpeg displayed less output, but the x264 command it was running dumped the contents of the output file in hex format to stdout.  The conversion was still happening, but GNU Screen was taking up 50% of one of my CPUs just to handle the stdout.  I am sure the conversion was happening slower than normal as well.  I couldn't get past this problem either.  I decided to convert my script to a bunch of echoes.  I echoed the ffmpeg command instead of executing it and redirected the output of the script to a file.  Now I have a huge Bourne shell script that I can execute.  That generated script contains no loops, so it runs just fine.

Tuesday, December 11, 2012

Android support for multiple display sizes

Most Android apps do not seem to scale to larger displays very well.  Some icons seem to remain the same size.  When I run the apps on my TV, I end up with a lot of dead space and hard-to-click icons.  The air mouse works great for apps that scale well, but works horribly for apps that do not.  I end up missing a lot and clicking the wrong buttons.  I spend too much time trying to turn shuffle on in a music player.  I complained about fullscreen mode in the Youtube app in a previous post.  I am starting to learn how to write Android apps, and I'm making sure to scale the content so that the interface is usable on small and large screens.  The API call I'm using, Display.getSize(Point), specifically says not to use that size for scaling purposes, but I have not found another API call that will let me do that yet.  The documentation says to use Layouts.  I have not found a way to scale using Layouts.  I still use Layouts for positioning, however.
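
To make the scaling idea concrete, here is a minimal sketch of one way to size a control as a fraction of the screen using DisplayMetrics instead of fixed pixel values.  The ScaledActivity class, its single button, and the divide-by-four sizing are made up for this example; it is just meant to show a control that stays clickable on a 1080p TV as well as on a phone, not the one true way the platform intends scaling to be done.

import android.app.Activity;
import android.os.Bundle;
import android.util.DisplayMetrics;
import android.widget.Button;
import android.widget.LinearLayout;

public class ScaledActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Real pixel dimensions of whatever screen we are running on.
        DisplayMetrics metrics = new DisplayMetrics();
        getWindowManager().getDefaultDisplay().getMetrics(metrics);

        // Size the button as a fraction of the screen instead of a fixed pixel count,
        // so it is still easy to hit with an air mouse on a big TV.
        int buttonWidth = metrics.widthPixels / 4;
        int buttonHeight = metrics.heightPixels / 8;

        Button play = new Button(this);
        play.setText("Play");

        LinearLayout root = new LinearLayout(this);
        root.addView(play, new LinearLayout.LayoutParams(buttonWidth, buttonHeight));
        setContentView(root);
    }
}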

Monday, December 10, 2012

API Website

I found a really awesome website that fits my tastes perfectly.  www.programmableweb.com is a website that lists the various APIs that are available for various websites.  Some of them are unofficial APIs.  One of my specialties is integrating systems.  I plan on using this website a lot in the future.  I have some projects that I'm working on that will benefit greatly from it.

Friday, December 7, 2012

MK802: Netflix update

When I first posted about Netflix on the MK802, I had only done a short test.  The video seemed fine and I didn't browse the menus too much.  My wife started using Netflix to watch some Christmas movies, and she started complaining.  Since I first tried the app, the video quality seems to have degraded pretty significantly.  Also, the interface itself is really nice.  Too nice.  It looks so nice that it runs very slowly.  Although usable, you need a lot of patience.  My wife reverted back to using the Wii to watch Netflix.

Thursday, December 6, 2012

Youtube: Full Screen and Thumbs Down

I have been using the Android Youtube app on my MK802.  By default, the video only uses the top-left 1/4 of the screen.  The bottom-left 1/4 is for comments, and the right 1/2 is for related videos.  On a TV, you want to watch the video in fullscreen.  There is a fullscreen button, but there is a slight problem.  The button is right above the thumbs down button.  Both buttons are kind of small.  The fullscreen button only appears if you click on the video.  It also fades away after a few seconds.  That means you have to be quick about hitting the button.  On a touch screen, this is not a big deal.  When you are using a mouse (or an air mouse), it can be a little difficult.  On a few occasions, I accidentally clicked the thumbs down button.  Every time it happened, I felt so bad that I clicked the thumbs up button to cancel it out.  I wish there was a better way to make the video fullscreen.

Wednesday, December 5, 2012

MK802: iHeartRadio

In a previous post, I mentioned that the iHeartRadio app was not available for the MK802.  I decided to fire up Bluetooth App Sender and transfer the app from my Nexus 7 to my Dropbox.  I downloaded the APK file from my Dropbox and installed the app onto the MK802.  It worked like a charm.  The iHeartRadio app works on the MK802.  It is just not available in the Play store.

Tuesday, December 4, 2012

Mark and Sweep Optimization: Copying collector

In the mark-and-sweep algorithm, the sweep phase defragments the heap by moving live objects next to each other.  This sliding of the objects requires a lot of tracking.  If your heap is small enough, it is more efficient to just copy all live objects to a new location instead of reusing your current heap.

This is what a copying collector does.  Take your heap and split it in half.  You use one half as your real heap.  When you GC, you move all live objects into the other half of the heap.  You now start using the other half of the heap.  Every GC cycle you switch which half you use.

The main disadvantage of a copying collector is the amount of memory it uses.  It essentially doubles how much heap space you need.  This is where the generational garbage collector comes in handy.  You can use the copying collector on the young generation since it is small.  This makes the frequent GC cycles that much faster.
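
To illustrate the idea (and only the idea; a real VM copies raw memory and leaves forwarding pointers behind), here is a toy Java sketch of a semispace copying collector.  The Node, fromSpace and toSpace names are made up for the example: collecting just means copying everything reachable from the roots into the other list and then swapping the two lists.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;

// Toy model only: the "heap" halves are plain lists of nodes.
class CopyingCollectorSketch {
    static class Node {
        List<Node> refs = new ArrayList<Node>();
    }

    List<Node> fromSpace = new ArrayList<Node>();  // the half we currently allocate into
    List<Node> toSpace = new ArrayList<Node>();    // the empty half we evacuate into at GC time
    List<Node> roots = new ArrayList<Node>();      // stand-ins for stack/static references

    Node allocate() {
        Node n = new Node();
        fromSpace.add(n);
        return n;
    }

    void collect() {
        Map<Node, Node> forwarded = new IdentityHashMap<Node, Node>();  // old node -> its copy
        ArrayDeque<Node> worklist = new ArrayDeque<Node>();

        // Copy the root set first, then everything transitively reachable from it.
        for (int i = 0; i < roots.size(); i++) {
            roots.set(i, copy(roots.get(i), forwarded, worklist));
        }
        while (!worklist.isEmpty()) {
            Node copied = worklist.pop();
            for (int i = 0; i < copied.refs.size(); i++) {
                copied.refs.set(i, copy(copied.refs.get(i), forwarded, worklist));
            }
        }

        // Everything left in fromSpace is garbage; swap the halves and keep allocating.
        fromSpace.clear();
        List<Node> tmp = fromSpace;
        fromSpace = toSpace;
        toSpace = tmp;
    }

    private Node copy(Node old, Map<Node, Node> forwarded, ArrayDeque<Node> worklist) {
        Node c = forwarded.get(old);
        if (c == null) {
            c = new Node();
            c.refs.addAll(old.refs);  // still points at old nodes; fixed up by the worklist loop
            forwarded.put(old, c);
            toSpace.add(c);
            worklist.push(c);
        }
        return c;
    }
}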

Monday, December 3, 2012

AVI Streaming

The AVI video container technically does not support streaming.  The great video player Mplayer supported streaming AVI over HTTP, however.  It started playing AVI files immediately, and supported seeking in the AVI file.  Since I use Linux/Mplayer, I was able to keep most of my videos in AVI format.  After setting up the Android TV, I started having issues.  I use the popular Android video player, MX Player.  When I play an AVI file, it takes about 30 seconds to start.  After it starts, it uses the S/W decoder, so the video is a little choppy.  I can seek in the video, though.

I started to research alternative video containers.  H264-encoded MP4 files started immediately in the bundled HD video player.  They also used the hardware decoder, so there was no choppiness.  I would have to transcode all my AVIs to MP4 files, which can take a long time.  My buddy gave me his script for converting files to MP4.  He used it to convert the videos he recorded to work on his iPhone.

The next format I tried was MKV.  The last time I tried using an MKV file, Mplayer would not seek into the file.  I found that kind of ironic.  For the current test, I used mkvmerge to convert a file from the AVI container to the MKV container.  MX Player started the video with no delay.  It also used the H/W decoder!  I thought this was the way to go, so I started converting my videos to MKV.  A few gigs later, I found a problem.  I forgot to test seeking!  MX Player does not support seeking in MKV files that are streaming over HTTP, just like Mplayer.  Mplayer just ignores seek commands, while MX Player tries to seek, fails, and just sits at a black screen forever.

I still have a few things to try, but I may end up converting my entire video collection to MP4.  I do have the horsepower to do it.

Friday, November 30, 2012

When do you need fast video cards?

I have been building computers for almost 20 years.  Because of this experience, people have been asking me what video cards they should get.  Video cards tend to be the single most expensive component in a computer and there is a very competitive market for them.  I tend to tell them to get the cheapest video card they can find, and they never listen to me.  They always get the most expensive video card in the range they want to spend.

Why do I tell them to get a cheaper card?  Mostly because I know they don't use any 3D tools that use the functionality of the card.  3D?  What does 3D have to do with a video card?  A little history.  You used to have two cards in your system for video.  One was your standard VGA card.  The other was a 3D accelerator card.  The 3D accelerator cards exposed a 3D API that programmers could use to make 3D applications faster.  These 3D cards had special chips (GPU) that performed very fast 3D operations.  At the time, the only applications that used the 3D API were high end games and high end graphic workstations.

Over time, the companies that made the 3D accelerator cards decided to merge the 2D VGA card with the 3D accelerator card.  Since every 2D card is almost the same, the manufacturers marketed the 3D features.  A 3D arms race ensued, and the 3D parts of the cards got better and faster and contained more features.  For many years, there were still only two types of applications that used those fancy (and expensive) features: 3D games and high end graphics workstations.

This is where myth #1 comes from for people who want high end graphics cards.  They heard that the 3D features were used by high end graphics workstations.  They didn't know what high end meant, but nothing is higher end than Adobe Photoshop, right?  Wrong.  Photoshop deals with a particular branch of graphics: specifically, manipulation of 2D raster images.  People are the most familiar with Photoshop because, to a non-technical person, manipulating a 2D raster image is what graphics is.  In the history section, we learned that the expensive part of the video card is for doing 3D, though.  What is a high end graphics program that uses 3D?  RenderMan.  For those of you not familiar with RenderMan, it is the high end graphics program written by Pixar to render all the Pixar movies, Titanic, Lord of the Rings, the Star Wars prequels, and most major movies with awesome special effects.  RenderMan allows you to view a lower resolution frame of the movie in 3D in real time.  The graphic artist can manipulate the 3D scene for the movie.  RenderMan then renders the full scene at a much higher resolution.

As time went on, people started realizing that the GPU was far better at some math than a typical CPU.  GPU manufacturers added a new API so that programmers could start using the GPU for things other than real-time 3D rendering.  People started using the GPU for massive public projects, like decoding the human genome or searching for extraterrestrial life.  Some used it to accelerate offline 3D rendering.  Enter myth #2: I do video editing, so I need a fast GPU.  This stems from the fact that programs like RenderMan started using the GPU for the rendering part of the movie creation.  Once again, those are 3D movies.  Home videos are not rendered, they are captured.  Although video programs like Adobe Premiere do allow you to use the GPU to speed up the encoding/compression phase of making your video, it is not worth the money unless you are making movies professionally.  If you make one movie a month, save the money.

Unless you are playing cutting edge video games, the main thing to consider is resolution.  Modern operating systems run in 3D mode to give you fancy eye candy.  This means if you want a high resolution, you need a decent video card with enough DRAM to handle the resolution.  The faster your card, the smoother your eye candy will be.  For video games, base your selection on the specific video game that you want to play.  Most of the 3D games I play are over 5 years old.

Thursday, November 29, 2012

Less features are often more

Our society is crazy for features.  When you want a device, you want the most features.  You tend to pay more for the devices with more features, since it costs more to make a device with that many features.  The truth is, we tend to not use those features.  By knowing what features you want, and which ones you don't care about, you can actually save a lot of money.  A really nice feature can also cost you a lot of money.

One of the best examples of this is an e-book reader.  When the Kindle first came out, it had a black-and-white e-ink display.  This e-ink display used very little battery.  Therefore, your device ran a really long time on a single charge.  The problem was the device was black and white, though.  Enter the iPad, Kindle Fire and Nook Color.  These are not e-book readers.  They are tablets that can be used as e-book readers.  They have far more features, are color and are far more powerful.  They tend to be "better" in every way, except for battery life.  Tablet screens require power to maintain their display, while e-ink does not.  If you take a minute to read a page, a tablet has been draining the battery for a whole minute, while the e-book reader only drained the battery for the second it took to turn the page.

This is an example where less is more.  By having a black-and-white e-ink screen, the battery lasts a lot longer.  If you are reading a book, you don't care about color, or the slow refresh rate.  You also enjoy the fact that you can read your book outside!

By concentrating on the features you need (I should be able to read a book) and not on the features you don't need (I don't need to play games or watch a video), you can save yourself a lot of money.  How much money?  The Txtr Beagle is a new e-book reader that is coming out soon.  The expected price for it is $13.  That is not a typo.  Compare that with the $499 you would spend for the current top-of-the-line iPad with retina display.  That is 97.4%, or $486, in savings.

How does a company get an e-book reader to cost that little?  By removing features.  First, one of the most expensive parts of an e-book reader is the battery.  Built-in rechargeable Lithium Ion batteries are expensive.  You also have to ship a charger with the device for the end user to charge with.  With the Beagle, you just use AAA batteries.  There is no need to ship a USB or power cable.  The device can do that because it requires so little power, since it is a true e-book reader.  It uses e-ink and only lets you read books.  When I told some people about this, they looked at me weird.  They see it as being overly cheap, when I think it is genius.  It drastically lowers the cost.

The device does not contain WiFi or 3G.  You have to load books onto it using a Bluetooth device.  The Beagle only contains 4GB of space for books.  For a true e-book reader, that is plenty of space.  E-books are small.  It's the music and videos that eat up space on a tablet.

This device does one thing and it does it well.  They innovated by removing features, not adding features.  The biggest feature of this device is the price.  While you get what you pay for, if you really are looking for a plain old e-book reader, then it is well worth the money.

Although this is a bit of an extreme example (I don't think I can find a 97% savings on anything else), you should figure out what features you want, and what features you don't need.  Less is more.





Wednesday, November 28, 2012

Mark and Sweep Optimization: Generations

One of the performance penalties of the mark-and-sweep algorithm is that it searches all active objects in your heap.  This means the larger your heap is, the longer the GC cycle takes.  One way to increase performance is to split your heap into generations.

Let's split the heap into three generations: young, middle and old.  All new objects go into the young generation.  When you fill up the young generation, you GC that generation only.  If an object survives enough GC cycles, then that object gets moved to the middle generation.  If that generation fills up, you GC that generation and move any longer lived objects into the next generation.

If you keep the heap size of this generation small, then the GC cycle is really fast.  In a typical Java application, most of your objects are GC'ed really fast because they are temporary objects.  Think of all the BigDecimal and String instances that you manipulate.  Those classes are immutable; every operation creates a new instance, and the old one can then be GC'ed in the young generation.

With a generational garbage collector, you perform more GC cycles, but since the young generation is small, each young generation GC cycle is a lot faster.  The longer GC cycles run far less frequently.
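
As a tiny illustration of the kind of churn the young generation is built for, the hypothetical loop below creates a million temporary BigDecimal instances, each of which becomes garbage on the very next iteration:

import java.math.BigDecimal;

public class YoungGenerationChurn {
    public static void main(String[] args) {
        BigDecimal total = BigDecimal.ZERO;
        for (int i = 0; i < 1000000; i++) {
            // BigDecimal is immutable: add() returns a brand new object every iteration,
            // and the previous value of 'total' immediately becomes garbage.  These are
            // exactly the short-lived objects a small young generation collects cheaply.
            total = total.add(BigDecimal.ONE);
        }
        System.out.println(total);
    }
}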

Tuesday, November 27, 2012

MK802: FoxFi

I use ethernet for my MK802.  This means I have an available WiFi card to play around with.  I decided to create a guest WiFi access point using FoxFi.  FoxFi is a tethering app for phones.  The goal was to "tether" the WiFi over ethernet.  Guests would get an easy password while my normal WiFi would keep the longer, more cryptic password.  Unfortunately, FoxFi crashed on the MK802.  It probably has something to do with there being no 3G connection.  Oh well.

Monday, November 26, 2012

Android ethernet support

Android technically supports ethernet.  Most devices do not support ethernet, though.  On those devices, the only way to access your local network is over WiFi.  I have the I/O Crest SY-ADA24005 USB 2.0 Ethernet Adapter connected to the MK802.  I have been playing around with various DLNA apps for Android.  A few of the apps won't run in my configuration.  They detect whether WiFi is enabled.  If it is not enabled, then they pop up a message telling you to enable WiFi.  This check can be user friendly on phones, but it means I can't use those apps without disabling ethernet and enabling WiFi.  This can be annoying.

Friday, November 23, 2012

Forcing orientation in Android

Some apps want you to hold your phone in a particular way.  They want the screen to be tall instead of wide.  Although I understand only wanting to support one orientation, it makes it difficult to use the app on a device that does not physically rotate.  Specifically, I can't rotate my TV!  I find it very frustrating when I start up an app on my MK802 and the screen orientation changes.  That change automatically changes how the mouse works as well.  When I try to move the mouse down, the mouse actually moves to the right, since the right side of the screen is now the bottom of the app.  It becomes very difficult to close the app with the mouse.  Luckily, my Mele Air Mouse has back and arrow keys that allow me to exit most applications.

Thursday, November 22, 2012

Tracking HashMap resizes

In Java, the initial size of a HashMap can be important.  The HashMap capacity is a power of two and doubles every time the number of entries exceeds the load factor threshold (if your HashMap capacity is 16 with a load factor of 0.75, then after you add the 13th item, the HashMap grows to 32).  The HashMap resize operation is a pretty intensive operation.  It is recommended to minimize the number of resizes a HashMap will do.  The initial size turns out to be pretty important when considering the performance of your application.

One thing that is missing in Java's default HashMap implementation is the ability to track the resizes of a HashMap.  It would be nice to write a log message every time the HashMap resizes.  In this log message, I would like to see the old and new size, as well as the class/line number of the code that created the HashMap.  After reviewing the logs, you can get an idea of which HashMaps are not initially sized correctly.  Unfortunately, the HashMap.resize() method is package-private instead of protected, so we can't override the method to add the log message.
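
Since resize() itself can't be overridden, one hedged workaround is to approximate the resize math from the outside.  The sketch below is my own illustration, not a JDK feature: the class name is made up, it assumes the default 0.75 load factor, and its numbers may drift from a particular JDK's internals.  It logs a message whenever a put() crosses the threshold where a resize would happen, along with the call site that created the map.

import java.util.HashMap;

// Mirrors HashMap's threshold math (power-of-two capacity, capacity * load factor)
// from the outside and logs when a put() would have triggered a resize.
public class ResizeLoggingHashMap<K, V> extends HashMap<K, V> {
    private int assumedCapacity;
    private final float assumedLoadFactor = 0.75f;  // assumes the default load factor
    private final String createdAt;

    public ResizeLoggingHashMap(int initialCapacity) {
        super(initialCapacity);
        // Round the requested capacity up to a power of two, the way HashMap does.
        int cap = 1;
        while (cap < initialCapacity) {
            cap <<= 1;
        }
        this.assumedCapacity = cap;
        // Remember who created this map so the log message points at the call site.
        this.createdAt = new Throwable().getStackTrace()[1].toString();
    }

    @Override
    public V put(K key, V value) {
        V old = super.put(key, value);
        if (size() > assumedCapacity * assumedLoadFactor) {
            int newCapacity = assumedCapacity * 2;
            System.out.println("HashMap created at " + createdAt
                    + " likely resized from " + assumedCapacity + " to " + newCapacity
                    + " at size " + size());
            assumedCapacity = newCapacity;
        }
        return old;
    }
}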

Wednesday, November 21, 2012

MK802: TNT Streaming App

I got excited when I saw this article on GeekSugar about TV apps for your iPhone and Android devices.  I thought to myself, I have an MK802!  I went through the slides.  The apps can be put into two categories: TV providers and channels.  A lot of TV providers, like Comcast and Time Warner, have started to provide apps to stream TV.  You have to be a subscriber to take advantage of those services.  I have Verizon FiOS, which is NOT on the list of TV providers that have a streaming app.  The next category is the channels that provide their content to a streaming app.  Most are premium channels, like HBO and Showtime, but TNT was on the list.  There are a few TNT shows that my wife and I watch.  I fired up the Play store on my MK802 and discovered that the app wasn't available.  I decided to install the app onto my Nexus 7 and used Bluetooth App Sender to upload the APK file to my Dropbox account.  On the MK802, I installed the APK file off of Dropbox.  After starting the app, two things happened: 1) the screen rotated so all the content was sideways and 2) the app crashed.  Oh well.


Tuesday, November 20, 2012

Whole disk encryption

I hear a lot about the pros (and sometimes the cons) of disk encryption.  You hear about government laptops being lost or stolen, and the question arises: why wasn't the hard disk encrypted?  You hear about accused criminals encrypting hard disks so that the prosecution can't get any evidence.  Users will encrypt their entire disk or a portion of it when storing tax or other personal information.  There seems to be some misunderstanding about how the technology works, and what it can and cannot do.

First, we need a little background on encryption.  There are two different types of encryption: 1-way and 2-way encryption.  In 1-way encryption, data only flows in one direction.  You can only encrypt the information.  You can't decrypt it.  This is commonly called hashing.  This type of encryption might seem worthless (and it doesn't have much use in disk encryption), but it has a whole lot of useful purposes that are unrelated to disk encryption.  2-way encryption is the type that allows you to encrypt and decrypt your data.  2-way encryption is protected by a key.  In disk encryption, that key is usually a password, but not always.  You use the key to encrypt the data and you use the key to decrypt the data.  The main thing to learn is that in order to read (decrypt) the encrypted data, you must enter a password/key.  This becomes important when talking about whole disk encryption.
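
For anyone who wants to see the 1-way vs 2-way distinction in code, here is a small Java illustration using the standard JCE classes.  The class name and sample data are made up, and this is only meant to show the concept, not a disk encryption implementation: the SHA-256 digest has no decrypt operation at all, while the AES ciphertext can be turned back into the original bytes by anyone holding the key.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class OneWayVsTwoWay {
    public static void main(String[] args) throws Exception {
        byte[] data = "my tax return".getBytes(StandardCharsets.UTF_8);

        // 1-way: hashing.  There is no API to get "my tax return" back out of the digest.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        System.out.println("digest is " + digest.length + " bytes, and that is all you get");

        // 2-way: keyed encryption.  The same key that encrypted the data decrypts it.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(data);

        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] plaintext = cipher.doFinal(ciphertext);
        System.out.println(new String(plaintext, StandardCharsets.UTF_8));  // "my tax return"
    }
}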

Disk encryption comes in two flavors: whole disk encryption and folder/file encryption.  In whole disk encryption, your entire disk is encrypted.  In folder/file encryption, only a section of your disk is encrypted.  Most media outlets tend to talk about whole disk encryption as the technology everyone should use.  They never mention any of the downsides.  They never say why it might be a good idea to use folder encryption.

Let's start with starting your computer.  If you have whole disk encryption, boot your computer.  If the technology is used correctly, it should prompt you for a password immediately, before Windows even boots.  If it did not, then the technology is flawed!  The problem here is the battle between security and user friendliness.  It is not very user friendly to force a user to enter one password to turn on the computer, then enter a different password to log into it.  Some vendors try to get the best of both worlds by using the computer's fingerprint as your key (this is why I made the distinction above about password vs key).  Your computer has a unique set of hardware in it.  The disk encryption software can look at the hardware and generate a key that can be used for encrypting the hard disk.  This means the disk is encrypted and you don't have to enter a password.  Seems nice, but two things should pop into your head: 1) what if I change the hardware?  And, most importantly, 2) what if someone steals the ENTIRE computer?  This encryption scheme doesn't help the government agencies who lost entire laptops of social security numbers.

Another problem that is often overlooked is performance.  Encrypting and decrypting data uses your CPU.  The more it uses your CPU, the less CPU is available for every other program that is running on your computer.  There are CPUs out there that are far more powerful than consumers need, but there is a growing trend to use power-efficient CPUs instead of power-hungry CPUs.  On top of that, encrypted data can be larger.  Depending on the technology that is used, it could be 50% larger.  Although that eats up more disk space, the bigger problem is that you must transfer more data to memory and decrypt it before you can use it.  That means disk reads/writes are a lot slower, and the disk encryption software is consuming a chunk of your RAM.  Depending on what tasks you are performing, these penalties can be pretty significant.  The more of a power user you are, the more you will feel this pain.

Encryption is supposed to increase security, but there is one area where it doesn't even try to help: spyware/malware.  If your computer gets attacked by malware and you use whole disk encryption, the disk is already unlocked while the computer is running.  The malware has access to every file that it would have had access to if you didn't encrypt your hard disk.  There is no protection there.

Let's talk about folder encryption.  Most operating systems support this right out of the box.  In folder encryption, your computer boots just like it normally did before encryption.  You only get prompted for a password when you try to access an encrypted folder.  You can also have multiple encrypted folders, each with a different password.  Although this can get confusing, it can help segregate your important information.  If someone steals your entire computer, your important folders are still encrypted.  The thief still gets your bookmarks or any data that you didn't encrypt, but the responsibility is on you to determine what is important enough to encrypt.

Since you are only encrypting your sensitive information, you do not suffer the performance penalty when going about your day-to-day activities.  You only suffer the problem when accessing your personal information.  For some of us, that is once a year when you do your taxes.  If your computer gets attacked, your folders are still encrypted.  It is a lot harder for the malware to steal the sensitive information (although still possible, it just makes it a lot harder).

I tend to hear security professionals compare computer security with bank security.  You can't have an absolutely secure system.  You have layers of security.  Whole disk encryption is an attempt to have absolute security.  Folder encryption is a layer.  You protect the information that is the most important to you.  You shouldn't be trying to encrypt all the day-to-day activities that you do (unless you are a business or a criminal).  For personal computers, protect the information that should be protected.

Monday, November 19, 2012

Simple Technology: Watering your Christmas tree

Technology tends to focus on creating new products using the most complex science of the day.  During the industrial revolution, many new inventions were powered by steam.  During the electronics age, everything was powered with tiny motors and relays.  In the computer age, every new "thing" either has a computer, is a computer, or runs on a computer.  Although I love the latest and greatest, I love it even more when something new comes out that uses older technology, and it works better than any other fancier device.

Christmas trees tend to consume a lot of water.  You tend to bend over a lot adding more water once or twice a day.  It can be hard to reach that far to add water.  What happens if you have to travel for the holidays?  Last year, I started to research a watering system for the Christmas tree.  I heard of funnel systems with tubes, but they were unsightly.  I saw electronic systems that notified you that the water level was low.  That doesn't exactly water the tree when it needs it, it just notifies you when it needs to be watered.  The next thing I saw was a box with a tube.  The box looked like a Christmas present.  The tube went from the bottom of the box to the water reservoir of a standard Christmas tree stand.

The device was a siphon pump.  You fill the box with water.  As the Christmas tree uses the water in the stand's reservoir, the box automatically fills the reservoir up to the top.  It is "smart" enough to NEVER overfill the reservoir!  The box essentially triples the effective reservoir size for the tree.  The fact that old technology can be so great seems so counterintuitive.  How can a device with no moving parts be better than a device with a speaker and moisture sensor?  How does a device with no sensor automatically know when to stop filling?  How does the water get pumped to the tree stand?

This device uses technology that is 3500 years old.  The box siphons water into the tree stand.  Once the pump is primed, the water level inside of the box matches the water level of the stand.  As you add more water to one, the siphoning action pumps the water to the other end.  If you remove water from one end, then the siphon moves water from the end that has more water to the end that has less water.  When you first set up the system, you add water until the water level in the tree stand is at its highest.  That tells you the max fill level in the box.  The tree uses the water in both reservoirs.  When it gets low, you fill the box back up to the fill level.

The beauty of this technology is that it fills a need perfectly.  It automatically waters the tree.  There is very little that can go wrong with the box.  Since it looks like a Christmas present, it is aesthetically pleasing.  Due to the simple design, it is cheap to produce.  It just works.




Friday, November 16, 2012

Locking out production ID's

An interesting debate pops up from time to time.  From a security standpoint, it is generally a good idea to lock out an account after too many failed login attempts.  This is done to prevent a dictionary or brute force attack.  You disable the account to prevent the password from being guessed.

Let's go into a data center now.  You have a website that connects to a database.  The account the website uses to connect to the database is protected with a username and password.  This raises the question: do you enforce the same lockout rules for this database account?

Based on the first paragraph, it seems obvious that for security purposes you should lock out the ID.  But if the ID gets locked out, your website goes down!  That is a denial of service attack: it becomes incredibly simple for an attacker to lock out the ID and force the website down.  On the other hand, you still have to prevent brute force and dictionary attacks.

To wrap up: for security reasons, you might want to have a password lockout policy for database users, but for security reasons, you might NOT want to have a password lockout policy for database users.

Thursday, November 15, 2012

Mark-and-Sweep Garbage Collection

Mark-and-Sweep is an algorithm for garbage collection.  Let's start with the concept of a root.  A root is a starting point for searches.  Roots are usually static variables and stack variables.  Roots point to objects in the heap.  Those objects can then point to other objects on the heap.  As you create new objects, you keep track of how much memory you are using.  If, during your next allocation, you need more space than is available, you invoke the Mark-and-Sweep algorithm.

First, the world is stopped.  Then, the algorithm starts from the roots and traverses the object graph (imagine a depth-first search through your objects).  Every time the algorithm visits an object, it "marks" it.  This usually means flagging a bit, but it could mean other things that I will get to later.  After marking the object, it continues searching for more objects to mark.  If an object is already marked, the algorithm does not traverse that branch again, which is what allows circular object references.

Once the mark phase is done, the sweep phase is initiated.  During the sweep phase, all objects that were previously marked are moved and rearranged so that they sit next to each other on the heap.  The sweep simply overwrites any blocks of memory that were not "marked".  After the sweep phase, all objects that are no longer referenced are just gone.
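
Here is a tiny sketch of the two phases in Java.  It is purely illustrative and my own simplification: real collectors work on raw memory blocks, and I have left out the moving/compacting step described above.

    import java.util.ArrayDeque;
    import java.util.ArrayList;
    import java.util.Deque;
    import java.util.List;

    class HeapObject {
        boolean marked;                                          // the "mark" flag
        final List<HeapObject> references = new ArrayList<>();   // objects this object points to
    }

    class MarkAndSweep {
        // Mark: a depth-first walk from the roots, flagging every reachable object.
        static void mark(List<HeapObject> roots) {
            Deque<HeapObject> stack = new ArrayDeque<>(roots);
            while (!stack.isEmpty()) {
                HeapObject obj = stack.pop();
                if (obj.marked) continue;      // already visited -- this is what makes cycles safe
                obj.marked = true;
                for (HeapObject ref : obj.references) {
                    stack.push(ref);           // keep searching for more objects to mark
                }
            }
        }

        // Sweep: anything left unmarked is unreachable and gets reclaimed;
        // survivors have their mark cleared for the next cycle.
        static void sweep(List<HeapObject> heap) {
            heap.removeIf(obj -> !obj.marked);
            heap.forEach(obj -> obj.marked = false);
        }
    }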

Many critics of mark-and-sweep point out that the "stop the world" part of the garbage collection can take a long time.  This can make an application "feel" unresponsive, since the app is literally not doing anything during the GC cycle (it would be bad to change pointers in the middle of a depth-first search).  In older versions of Java, the GC cycle did consume a lot of time, especially for large apps.  Why would you use such an algorithm then?

The main advantage of mark-and-sweep is that it makes other operations faster than they can be with other memory management solutions.  Specifically, allocating objects is really fast.  In most other memory management solutions, you have to search for an open spot in memory; that is a linear search every time you allocate an object.  With mark-and-sweep, you maintain a pointer to the first free memory block.  To allocate memory, you just use the free memory pointer that you currently have, then advance the free pointer to just after the block you allocated.  That is constant-time allocation!  The penalty is periodic GCs that are linear in the size of your heap.  Those GCs can be managed, however.  I'll talk about that in a future blog post.
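
A sketch of that constant-time allocation path (again illustrative, with made-up names; an "address" here is just an offset into one big array):

    class BumpAllocator {
        private final byte[] heap;   // one big pre-allocated heap
        private int free = 0;        // pointer (offset) to the first free byte

        BumpAllocator(int heapSize) {
            heap = new byte[heapSize];
        }

        // Allocation is just "hand out the free pointer, then bump it" -- constant time.
        int allocate(int size) {
            if (free + size > heap.length) {
                markAndSweep();                  // the periodic, linear-time GC penalty
                if (free + size > heap.length) {
                    throw new OutOfMemoryError();
                }
            }
            int address = free;
            free += size;                        // save the new free pointer
            return address;
        }

        private void markAndSweep() { /* mark, sweep, compact survivors, reset 'free' */ }
    }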

Advantages

  1. Fast object/memory allocation
  2. Memory limits
    • Mark-and-sweep is implemented using malloc/free under the hood.  With this abstraction layer, you can actually set a limit on how much memory your process can use
  3. Locality
    • There are two important steps that increase locality
      • New objects are created next to each other in the heap
      • Objects that survive a GC are put next to each other on the heap
    • By having objects next to each other in the heap, you minimize cache misses
Disadvantages
  1. Long GC cycles give the appearance of slowness
    • This can be mitigated
  2. Excessive use of memory
    • Since GC cycles don't kick off until you run out of memory, you tend to use more memory
    • Heap sizes increase when you run out of memory.  If that increase was due to a temporary spike, your heap size never decreases
  3. Tends to be abused
    • Developers who have never needed to do their own memory management tend to abuse automated memory management systems, so they tend to use more memory

Wednesday, November 14, 2012

MK802: BS Player

MX Player doesn't allow you to control the network streaming buffer size.  Because of this, I decided to try out BS Player.  BS Player doesn't have a Neon Arm V7 codec; it just has a regular Arm V7 codec.  Unfortunately, BS Player still won't play most of my video files fast enough, even on my 720P TV.  Back to MX Player.  I guess I have to wait for MX Player to get some more features.

Tuesday, November 13, 2012

WiFi Extenders

I have been looking for a WiFi extender that bridges over ethernet.  Many WiFi extenders are simply repeaters.  This means you connect to the repeater over WiFi, and the repeater connects to your WiFi router over WiFi.  Most consumers want this setup because you don't have to worry about running ethernet cable to the far ends of your house.  You buy a repeater to essentially boost your signal.  Repeaters don't increase bandwidth, though.

Imagine a simple setup with one WiFi router and two wireless devices.  If both devices are being used at the same time, they have to split the bandwidth between them, because they are sharing the same WiFi channel.  Now extend that to the repeater layout.  You still have two devices, but each has its own access point.  The problem is that the repeater acts like just another wireless device.  In reality, your router has two stations sharing its channel (the first device and the repeater), and the repeater is relaying for the single device behind it.  You still only get half the bandwidth.
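
A rough worked example (the 150 Mbps link rate is just an assumed, 802.11n-ish number for illustration): whether both devices talk straight to the router or one of them goes through the repeater, everything still shares that one channel, so each device tops out around

    $\frac{150\ \text{Mbps}}{2} \approx 75\ \text{Mbps per device}$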

With ethernet bridging, you have two access points.  You connect the access points to each other with high speed ethernet cable.  It is a bit over-simplified, but you can now have two devices, and each device gets the full 802.11a/b/n bandwidth.  When you have lots of wireless devices (3 laptops, 2 cell phones, 2 MK802s and one desktop), your devices can easily conflict with each other.  Things get worse when you start buying WiFi IP cameras.

I finally found a relatively cheap extender that allowed me to bridge over ethernet.  I bought it and started reading the instructions (imagine that!).  The instructions stated that bridging over ethernet was not recommended!  I tried it anyways.  Once I enabled the extender, the entire WiFi network came down.  I couldn't connect to either access point.

The second extender I tried was the Uspeed Wifi Repeater.  This device is designed to be a repeater, meant to plug into an outlet in an inconvenient location.  It does support ethernet bridging, though.  I set up the bridging and it started working.  It took less than 5 minutes to set up.  I now have two access points.


Monday, November 12, 2012

MediaTomb

I have been playing with DLNA to stream video to my MK802.  I started trying out various DLNA servers, which led me to MediaTomb.  MediaTomb is a great DLNA media server that supports storing information in a database and has a web interface for administration.  It was easy to set up and get running.  It was in the Gentoo Portage tree and came with a decent default configuration.  MediaTomb does hash all of your files, so if your files are on a network drive, it may take a while to scan everything.  I started using various DLNA media players on the MK802 while the scan was still running.  It seemed to work very well.  Then something weird started to happen.  The folder structure didn't match what I had.  I organize my files into folders.  I have folders inside of folders.  I always felt that was kind of the point of folders.  You can nest them!  What I noticed was that MediaTomb flattened the folders.  When browsing, you get a list of folders, regardless of their parent or depth.  Instead of having a movies folder and a shows folder, I had a lot of folders in the root.  I couldn't navigate.

Friday, November 9, 2012

Mele Fly Sky Air Mouse

The Mele Fly Sky Air Mouse is a really nice device.  I have been using the remote for a few days on the MK802.  Being part mouse, part remote and part keyboard can get a little confusing for users.  One side of the Air Mouse is a full QWERTY keyboard.  The other side is half mouse and half remote control.  When holding the device as a remote control, it is the perfect size.  It contains the up/down/left/right buttons that you would find on a remote control.  The center button inside of the D-buttons is the left mouse click, however.  Above the D-buttons are Enter and Back keys.  If you are using the D-buttons and want to select something, you have to use the Enter button, not the left mouse button.  I find it very counterintuitive, but you learn.

The Home button does not take you to the home screen.  The Settings button does open the settings menu, however.  The Volume up and down buttons do adjust the volume, but the Mute button does not mute.  It is worth noting that almost every button that DOESN'T have a default action does get sent to the app.  In a future post, I will give out all the integer codes for the Air Mouse.  I was pleasantly surprised to find that the arrow keys worked in far more Android interfaces than I thought they would.  It is almost like the Android developers had some foresight that someone would want to use a D-pad to navigate.  This was nice, since using the mouse took a little effort.

The mouse kind of acts like a WiiMote in the sense that waving the Air Mouse moves the mouse pointer.  It is not a pointer, though.  Just pointing at the icon you want to click won't work like it does on the WiiMote.  If I had to guess, I think the Air Mouse contains a gyroscope.  The gyroscope measures changes in pitch and yaw.  Yaw changes move the mouse left and right.  Pitch changes move the mouse up and down.  The hard part to get used to is the fact that the mouse moves based on the change in pitch and yaw, not the absolute orientation like a WiiMote.  For example, point a WiiMote toward the ground.  Then tilt it up 45 degrees.  Since you only tilted up 45 degrees, you are still mostly pointing toward the ground, so the Wii's pointer won't move.  Now do the same thing with the Air Mouse.  The mouse pointer moves up!  It doesn't matter what direction the Air Mouse is pointing; the mouse will move based on the pitch and yaw changes.  It takes some getting used to, but I adapted pretty quickly.  Another thing to note is that altitude, roll and side-to-side changes do not affect the mouse pointer.
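
My mental model of that behavior, as a short sketch.  This is a guess on my part: the gyroscope assumption, every name, and every constant here come from me, not from Mele.

    class AirMouseModel {
        static final float GAIN = 40.0f;      // assumed sensitivity, in pixels per degree
        static final float DEADZONE = 2.0f;   // assumed threshold that ignores tiny wiggles

        int cursorX, cursorY;

        // Called with angular *rates* from the gyroscope plus the elapsed time.
        void onGyroSample(float yawDegPerSec, float pitchDegPerSec, float dtSec) {
            float dx = yawDegPerSec * dtSec * GAIN;     // turning left/right slides the pointer sideways
            float dy = -pitchDegPerSec * dtSec * GAIN;  // tilting up/down moves it vertically
            if (Math.abs(dx) + Math.abs(dy) < DEADZONE) {
                return;                                 // below the threshold: the pointer stays put
            }
            cursorX += Math.round(dx);                  // only *changes* in pitch and yaw matter,
            cursorY += Math.round(dy);                  // not where the remote is actually pointing
        }
    }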

There are some pluses and minuses to this setup versus the WiiMote pointer style.  WiiMotes can get shaky when you try to select something; the pointer drifts because most people can't keep their arms perfectly still.  The Air Mouse pointer doesn't have this shakiness.  If you are pointing at what you want, the pointer rarely drifts away from it.  The Air Mouse only moves the mouse if you have changed the pitch or yaw enough.  This threshold is set frustratingly high, but I have a feeling it is set that way to prevent some of the jitters that the WiiMote suffers from.  It does make it hard to select small buttons on the screen, however.

The keys on the keyboard were large enough to hit very easily, but the keyboard was a little too wide to type at a fast rate.  I think this was a design trade-off between being a remote and a keyboard.  If they made the keyboard narrower, the remote wouldn't be as long and it might be a little awkward to hold as a remote control.  Because of that, I can't fault the designers for this.  I found the Shift and FN keys a little counterintuitive.  The Shift key acts as a standard Shift key: it only makes things upper case if you hold it down while typing a letter key.  The FN key is a mode, however.  Every time you hit the FN key, it switches between the white and the orange keys.  This gets difficult when you are typing something that requires you to go back and forth between white and orange.  The FN key doesn't always register, so you don't know which mode you are in until you type.  This can be demonstrated by typing in a URL.  You start with H, T, T, P in white, then Colon in orange, then Slash and Slash in white.

Overall, I am very happy with the device.  The mouse has had occasional problems that required a reboot (there is a small button to press on the side), but it hasn't happened often.  I have been able to navigate through most interfaces with ease.  I can switch between using it as a remote, a mouse and a keyboard to navigate, configure and use my Android TV.

Thursday, November 8, 2012

MK802: Accelerated MX Player

Update: The new MK802 III supports H/W decoding for all files

In a previous post, I mentioned how MX Player wouldn't use the H/W decoder in the MK802.  The Play store lists codec packs for MX Player, but I didn't know which one to use.  On top of that, the codecs say not to install them unless MX Player specifically tells you to, and MX Player wasn't telling me which one.  Instead of trial and error, I decided to just try different players.  One of the players I tried was called aVia.  aVia wouldn't use the hardware decoder, but a popup came up telling me to install the Arm V7 Neon codec.  I installed it and aVia started to use the hardware decoder!  A quick check revealed that MX Player had a codec with the same name.  MX Player started to use the hardware decoder.  The hardware decoder wasn't used for xvid files, but the software decoder was now fast enough to play the files perfectly fine on my 720P TV.  MX Player still won't play video well enough on my 1080p TV.

Wednesday, November 7, 2012

Reference Counting Garbage Collection

Reference Counting is a mechanism for garbage collection.  Every object gets a counter.  When you create the object, the counter starts at zero.  When another object or scope gets a reference to the object, the counter is incremented by one.  When a reference goes out of scope, the counter is decremented.  If a decrement brings the counter to zero, the object is destroyed.  Because the object is being destroyed, it decrements the counter of every object that it uses.  If any of those objects' counters reaches zero, they are destroyed as well.  This chain reaction destroys all objects that are no longer used... or so you think.
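
A minimal sketch of the bookkeeping (illustrative only; Java itself does not use reference counting, and the class and method names here are made up):

    import java.util.ArrayList;
    import java.util.List;

    class RefCounted {
        private int count = 0;                                   // starts at zero on creation
        private final List<RefCounted> children = new ArrayList<>();

        void retain() { count++; }                               // someone took a reference

        void pointTo(RefCounted other) {                         // this object now uses 'other'
            children.add(other);
            other.retain();
        }

        void release() {                                         // a reference went away
            count--;
            if (count == 0) {
                for (RefCounted child : children) {
                    child.release();                             // the chain reaction described above
                }
                destroy();
            }
        }

        private void destroy() { /* free this object's memory */ }
    }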

Advantages:
  1. Objects are destroyed as soon as they fall out of scope.  This means the program does not consume more memory than it needs
    • Heap sizes grow and shrink based on the program's needs
  2. Garbage Collection runs in constant time, since you don't have to search for objects to destroy
    • No stop-the-world garbage collection cycles


Disadvantages
  1. Cyclic pointers are not supported.  Imagine a scenario where object A points to object B, and object B points back to object A.  The counters for both objects will never reach zero, since they will always have something pointing to them (each other).  This is called a memory leak (see the short example after this list)
    • History note: This is why older versions of Internet Explorer had a tendency to consume all of your memory.  The DOM tree and the JavaScript object tree were managed with reference counting
  2. The counters require synchronization, so there is a performance penalty for all counter changes in a multi-threaded environment
  3. Allocating memory still requires searching for an empty spot large enough to hold your object, so allocating memory takes longer
  4. Because of reason #3, sequentially created objects tend to not be local to each other
    • Commonly referred to as Swiss Cheese Memory!
    • Due to non-locality, there is an increase in cache misses
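
The cycle from disadvantage #1, written against the sketch from above (same made-up RefCounted class):

    class CycleDemo {
        static void demonstrateLeak() {
            RefCounted a = new RefCounted();
            RefCounted b = new RefCounted();
            a.retain();        // held by the local variable 'a'
            b.retain();        // held by the local variable 'b'
            a.pointTo(b);      // A points to B  (B's count is now 2)
            b.pointTo(a);      // B points back to A  (A's count is now 2)
            a.release();       // the locals go out of scope...
            b.release();       // ...but both counts stop at 1 because A and B still hold each other.
                               // Neither object's destroy() ever runs: a memory leak.
        }
    }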



Tuesday, November 6, 2012

MK802: BubbleUPnP

Although BubbleUPnP seemed like a good DLNA app when I launched it, I quickly learned it was not for me.  The biggest problem I found right off the bat was the ineffectiveness of the Enter and Back buttons on my remote control.  I was able to use the Up and Down buttons to pick a folder, but when I hit the Enter button to go into the folder, it did not work.  The folder glowed for a second, and I was stuck there.  I had to put the remote control into mouse mode and left click the folder.  Since this is critical for me, I decided to keep hitting the Back button until I was back in the Play store so that I could uninstall the app.  Surprise number two was that the Back button would not let me exit the program.  It took me to the parent folder, but once I was on the app's home screen, I had to use the mouse to pull up the settings menu and select "Exit".  I have uninstalled the app.

Monday, November 5, 2012

USB Crash

One of the USB buses has crashed on my computer!  Devices can still get power, but nothing I plug in gets recognized by the computer.  I had 8 devices plugged in, but most of them were only for charging.  Only 3 devices were transmitting data.  Luckily two USB ports still work.

Friday, November 2, 2012

Missing a Terabyte

I'm a bit old school.  I am sad to say that I have been using fdisk for half of my life.  I can count on one hand how many times I have used a graphical program to partition a hard disk.  Imagine my surprise when I realized my 3TB hard disk only had 2TB usable!  It turns out MBR partition tables don't support hard disks larger than 2TB.  To partition a bigger hard disk, you have to use a GUID Partition Table (GPT).  According to the Wikipedia article, fdisk doesn't support GPT.  I guess I have to learn a new partitioning tool.
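
The 2TB limit comes from the MBR's 32-bit sector addresses.  With 512-byte sectors, the largest disk an MBR partition table can describe is:

    $2^{32} \times 512\ \text{bytes} = 2{,}199{,}023{,}255{,}552\ \text{bytes} \approx 2.2\ \text{TB}\ (= 2\ \text{TiB})$

Anything past that point simply isn't addressable from the MBR, which is why the extra terabyte looked like it was missing.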

Thursday, November 1, 2012

What is garbage collection?

When the iPhone 5 came out, there was a resurgence of the iOS vs Android debate.  In one of the comment sections of an article (I really have to stop reading the comments), an iPhone user argued that iOS is better because Objective-C programs use Automatic Reference Counting (ARC) instead of the "Garbage Collection" that Android uses.  As a Computer Scientist, my head almost blew up.

I know what the person was talking about, but it irritates me when it is used in arguments.  It may be nitpicking, but ARC is GC.  Garbage Collection means the developer doesn't have to explicitly delete any memory that they create.  It does not imply anything more than that.  Reference Counting is one implementation of Garbage Collection.  Mark-and-Sweep is another implementation.  The person was trying to argue ARC vs Mark-and-Sweep, but didn't know the term Mark-and-Sweep, or Generational Garbage Collection.  It was obvious the person didn't understand the changes in Java's Garbage Collection implementation since Java 1.1.  That is not unreasonable.  When teaching Garbage Collection algorithms, many courses stop at basic stop-the-world Mark-and-Sweep.  All the advances in Java's GC implementation are variations of Mark-and-Sweep.

My main problem with the comments in this thread was that the person claimed ARC had no negatives while "Garbage Collection" had no positives.  The comments sounded like they came from a person who never had any formal Computer Science training and has never programmed for anything other than iOS.  In future posts, I will talk about some of the misconceptions around Garbage Collection and go over the long lists of pros and cons for each implementation.

Wednesday, October 31, 2012

Sharepoint wiki

I really dislike the Sharepoint built-in wiki.  I find the WYSIWYG editor annoying.  When I try to highlight a block of text, it takes a few seconds for something to happen.  The background color of the text doesn't change, though.  You just notice that the cursor position has changed.  If you hit Control-C to copy, then the background color changes.  When pasting the text, the page seems to randomly decide whether it should insert a carriage return.  If I put the cursor at the end of a line, the cursor disappears after a second and magically reappears at the beginning of the next line.  The arrow keys only work about 70% of the time.  It is very frustrating.

Tuesday, October 30, 2012

Maven's single-threaded pipeline

Maven has a single-threaded pipeline.  When you tell Maven to run the test phase, Maven will run the compile phase and then the test phase.  There is only one test phase.  It doesn't matter how many types of tests you want to run.  I tend to write unit and integration tests.  There are also static analysis tools that take a while to run.  With a single-threaded pipeline, all of these tests run sequentially.  If any one of these steps fails, the entire "build" fails.  This is what a lot of people want: immediate feedback about problems.

As your code base gets more complicated, so do your unit and integration tests.  Then, someone decides it would be a great idea to integrate a real application server into your build process.  Now your war file gets deployed to a Tomcat server.  You wait for the Tomcat server to start, then you run a suite of REST calls, or maybe a browser simulator.  All of this happens as part of the build process.  If a webpage fails to render, you want to know as fast as possible.  With all of your tests running sequentially, your build now takes 3 hours.  3 hours is not immediate feedback.

I'm sure the Maven people can come up with some complicated way of having multiple POM files and Jenkins jobs that kick each other off.  To me, the right answer is to have a multi-threaded build pipeline that supports distributed phases.  In my opinion, the first build phase should only build your code.  If the build succeeds, that code should be uploaded to an artifact repository.  If any testing is done during the build phase, it should be unit tests only.  There should not be any integration tests or static analysis.  If the code will not build and is not self-consistent, then it is useless.  If it does build and is self-consistent, then upload it to the artifact repository.  If you use Maven, this might not sit well with you: once you upload the artifact, every artifact that depends on that version starts using it, because the -SNAPSHOT pointer will return the code you just uploaded.  We also already established that locked snapshots don't work very well.  Why would you upload untested code to your artifact repository?  At the bottom of the snapshot vs milestone article, I talk about how milestones should be pointers.

First, the newly created artifact should get a pointer with a name like -LATEST_BUILT.  This pointer will point to the latest code that has built successfully.  Once that artifact is uploaded, its unique name should be put into a bunch of queues.  There should be one queue for each type of parallel testing that you want performed.  You can have a queue for integration tests.  You can have a queue for static analysis tools.  You can have a queue for testing your application within Tomcat.  At the end of each successful run of a sub-build task, a new pointer is created or updated.  For integration tests, it's -LATEST_INT_TESTED.  For static analysis tools, it can be -LATEST_ANALYZED.  If any sub-build task fails, the pointer isn't created and someone is notified.  The build flow defines a rule for what an overall successful build requires.  You can define success as requiring every sub-build task to complete.  You can also split sub-build tasks into two categories: the ones that are required for an overall success, and the ones that are optional.
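
To make the shape of this concrete, here is a toy sketch of the fan-out.  Every class, method and pointer name here is mine, not something Maven or Jenkins provides:

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.LinkedBlockingQueue;

    class BuildFanOut {
        // Pointer names (like -LATEST_BUILT) mapped to the unique artifact they point at.
        final Map<String, String> pointers = new ConcurrentHashMap<>();

        // One queue per kind of parallel testing.
        final List<BlockingQueue<String>> testQueues = Arrays.<BlockingQueue<String>>asList(
                new LinkedBlockingQueue<String>(),    // integration tests
                new LinkedBlockingQueue<String>(),    // static analysis tools
                new LinkedBlockingQueue<String>());   // Tomcat / browser suite

        void onSuccessfulBuild(String artifactId) throws InterruptedException {
            pointers.put("-LATEST_BUILT", artifactId);   // latest code that built successfully
            for (BlockingQueue<String> queue : testQueues) {
                queue.put(artifactId);                   // each sub-build task picks it up independently
            }
        }

        // Each worker drains its own queue and, when its tests pass, advances its own
        // pointer (for example -LATEST_INT_TESTED or -LATEST_ANALYZED); a failure just
        // skips the pointer update and notifies someone.
    }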

This setup has a few advantages over the single threaded pipeline.  First, integration tests and static analysis tools can now run in parallel.  Second, you can fire off the next layer of integration tests while you are still fixing a static analysis tool issue.  Forgetting to remove an unused import shouldn't cut short your rounds of integration tests.  You can rebuild and integration test your war file while fixing your imports.  Third, your build pipeline can now support multiple platforms.  If you are writing a desktop Java application, you can have daemons running on Windows, Linux and Mac that run a test suite on each platform.  If you are writing an Android application, you can have a bunch of queues that simulate each phone.  Even better, you can have a lab of phones plugged into computers.  Each time you run a build, the test suite is executed on the actual phones!

Supporting multiple build tasks that run independently of each other could open up a lot of possibilities.  A setup like this could facilitate having a much larger automated test suite.  You could run more tests in less time.

Monday, October 29, 2012

Bluetooth headphones

My wife and I have been having issues with the ear buds that came with our phones while working out.  We have other ear buds but those get annoying as well.  They work fine when walking or hiking but they get caught when doing other activities.  We both wanted something wireless, but most of the Bluetooth headphones out there were more expensive than we wanted to pay.  Then I found the Motorola S305 Bluetooth Stereo Headset.  At $36, this was a great deal.  The headset has a good set of features.  I only wanted them for music, but they do support making and receiving phone calls.  All the controls are on the right ear piece.  The face of the ear piece has buttons for phone calls and skipping songs.  The outside of the ear piece has the power and volume buttons.  I haven't tried to make or receive any phone calls.  I was able to go to the gym without any problems.  Going on a hike or walking around the block worked out really well.  Overall, I highly recommend them.  They are great products for a low price.


Friday, October 26, 2012

Running out of temp space

There has been some confusion about the /tmp filesystem in Solaris.  The confusion stems from the fact that the filesystem is listed as 'swap'.  The fun starts when you run out of virtual memory.  The problem is a lot of people think swap and virtual memory are the same thing.

When you run out of virtual memory and then try to write into /tmp, you get an error message saying the filesystem is full.  Some error messages are not very clear.  This error message is completely clear, but completely wrong.  The problem is that less-technical people will insist they know the real solution.  They think we should increase the size of the /tmp filesystem.  It took a lot of arguing to convince people that we didn't need to increase the size of /tmp.  The next thing they wanted to do was buy more RAM, but that was a whole other argument for another day.

Thursday, October 25, 2012

MK802: Youtube network issues

My wife and I have started to watch a lot of Youtube shows.  While most episodes are short, others are longer.  CrashCourse episodes are over 10 minutes long.  TableTop episodes are over half an hour long.  We watch the episodes on various devices.  The episodes play fine in Google Chrome on my laptop.  My Google Nexus 7 plays the videos just fine (notice a trend with the vendor?).  The Youtube app on the MK802 is having connectivity issues, though.  The app will download 1 to 2 minutes of video into the buffer, then it will just stop.  You won't notice it until the video hits that mark.  Then you get the spinning circle forever.  I have found that switching from HD to SD tells the Youtube app to redownload the video, starting from where the connection was lost.  Then, 1 to 2 minutes later, it happens again.  I switch back to HD and start over.  It is a pain.  This happens with both of my MK802's, so I know it isn't a problem with one particular device.

Wednesday, October 24, 2012

MK802: MX Player will only play in S/W Decoder mode

I started trying to use MX Player on a wide variety of video formats.  I noticed that some of the videos played pretty badly.  That is when I noticed that MX Player was in S/W mode.  It would not run in H/W mode.  I even tried H/W+ mode.  Instead of a random sampling, I decided to try out some of the videos that we tend to watch most frequently.  Some of those performed so badly that they were not even watchable.  The Youtube app seemed to have no problem playing any video, so I know the hardware decoder can be used.  I tried other apps like VPlayer, Mobo Player, BS Player, ES Video Player and a few others.  All seemed to have the same performance characteristics as MX Player.

Tuesday, October 23, 2012

Searching for an outside security camera

I want to build a security system, but I am having trouble finding the right hardware.  I plan on using Motion to record video only when there is movement outside.  I prefer an IP-based camera instead of a capture card.  Capture cards limit the number of cameras that you can use.  With IP cameras, I can buy my cameras over time and add them to the system (who can afford to buy them all at once?).

Most of the home cameras that I have seen use an AC to DC adapter.  If I mount the camera under the overhang of my roof, then I need to put power somewhere.  Normally, you would run power inside the attic, similar to an outdoor light.  If you do that with one of these cameras, then you are putting the AC to DC adapter inside the attic.  Attics are hot.  When AC to DC adapters get too hot, they tend to explode.  Explosions are bad for houses.  This means I would have to put outlets under the overhangs next to each camera.  Each of those outlets would have to be a GFI as well.  Nothing screams professional like having a big AC to DC adapter plugged in next to your camera.

Monday, October 22, 2012

ASRock boot problems

One of my hard disks started to report IO errors.  Since I manually mirror files onto different disks, I knew I wouldn't lose any data.  I started the process of adding a third mirror for all the files on that disk, so that when it dies, I will still have two mirrors of every file.  About two weeks later, I came home and the computer was unresponsive.  I tried rebooting it, and the ASRock boot screen came up, but it wouldn't move on from there.  I tried powering it off for a few minutes, just in case it had overheated.  It still didn't work.  Then I remembered the time I left a Wii disk in the DVD drive.  The bad hard disk must be preventing the computer from coming up.  I opened the case.  I have 7 hard disks in this computer, so there was no way of determining which disk was the bad one just by looking at it.  I unplugged all the SATA cables and turned the computer on.  It beeped and the GRUB boot screen came up.  I turned the computer off, plugged in a single SATA cable and turned the computer back on.  I kept doing this until the GRUB screen no longer came up.  I took that disk out, and it was the right one.  I guess ASRock motherboards are very sensitive to bad boot devices.

Saturday, October 20, 2012

MK802 and Bluetooth

So, it looks like the Android OS that comes with the MK802 doesn't support Bluetooth.  I started googling around and found an article that says you can install an unofficial version of Cyanogenmod 9 to get Bluetooth support.  It talks about some of the difficulties with installing CM9 and some of the problems people have reported.  Although I want Bluetooth support (I bought USB Bluetooth dongles!), I don't need it badly enough to go through that pain.

Friday, October 19, 2012

Every day is Black Friday!

I do most of my electronics shopping online.  I comparison shop, so I don't buy from a single retailer.  This has put me on multiple mailing lists for advertisements.  I have noticed an interesting trend: every day is Black Friday!  Historically, Black Friday featured the best deals.  Black Friday has been lasting longer and longer; it lasted about two weeks this past Christmas.  To inflate their sales, the various online retailers are trying to capitalize on the visceral reaction people have to Black Friday.  I'm getting sick and tired of these sales that aren't as great as they make them seem.