I have been building computers for almost 20 years. Because of this experience, people ask me what video card they should get. Video cards tend to be the single most expensive component in a computer, and the market for them is very competitive. I tell them to get the cheapest video card they can find, and they never listen to me. They always buy the most expensive card in their price range.
Why do I tell them to get a cheaper card? Mostly because I know they don't use any 3D tools that exercise the expensive parts of the card. 3D? What does 3D have to do with a video card? A little history. You used to have two cards in your system for video. One was your standard VGA card. The other was a 3D accelerator card. The 3D accelerator cards exposed a 3D API that programmers could use to make 3D applications faster. These 3D cards had special chips (GPUs) that performed very fast 3D operations. At the time, the only applications that used the 3D API were high end games and high end graphics workstations.
Over time, the companies that made 3D accelerator cards merged the 2D VGA card and the 3D accelerator card into one. Since every 2D card is almost the same, the manufacturers marketed the 3D features. A 3D arms race ensued, and the 3D parts of the cards got better, faster, and more feature-rich. For many years, there were still only two types of applications that used those fancy (and expensive) features: 3D games and high end graphics workstations.
This is where myth #1 comes in for people who want high end graphics cards. They heard that the 3D features were used by high end graphics workstations. They don't know what high end means, but nothing is higher end than Adobe Photoshop, right? Wrong. Photoshop deals with a particular branch of graphics: manipulation of 2D raster images. People are most familiar with Photoshop because, to a non-technical person, manipulating a 2D raster image is what graphics is. From the history above, though, the expensive part of the video card is for doing 3D. What is a high end graphics program that uses 3D? RenderMan. For those of you not familiar with RenderMan, it is the high end graphics software written by Pixar and used to render all the Pixar movies, Titanic, Lord of the Rings, the Star Wars prequels, and most major movies with impressive special effects. RenderMan lets the graphic artist view a lower resolution frame of the movie in 3D in real time and manipulate the 3D scene. RenderMan then renders the full scene at a much higher resolution.
As time went on, people realized that the GPU was far better at some math than a typical CPU. GPU manufacturers added new APIs so that programmers could use the GPU for things other than real time 3D rendering. People started using the GPU for massive public projects, like decoding the human genome or searching for extraterrestrial life. Some used it to accelerate offline (non-real-time) 3D rendering. Enter myth #2: I do video editing, so I need a fast GPU. This stems from the fact that programs like RenderMan started using the GPU for the rendering part of movie creation. Once again, those are 3D movies. Home videos are not rendered; they are captured. Although video programs like Adobe Premiere do let you use the GPU to speed up the encoding/compression phase of making your video, it is not worth the money unless you make movies professionally. If you make one movie a month, save the money.
Unless you are playing cutting edge video games, the main thing to consider is resolution. Modern operating systems run in 3D mode to give you fancy eye candy. This means if you want a high resolution, you need a decent video card with enough DRAM to handle the resolution. The faster your card, the smoother your eye candy will be. For video games, base your selection on the specific video game that you want to play. Most of the 3D games I play are over 5 years old.
Thursday, November 29, 2012
Fewer features are often more
Our society is crazy for features. When you want a device, you want the one with the most features. You tend to pay more for devices with more features, since it costs more to build them. The truth is, we tend not to use most of those features. By knowing which features you want, and which ones you don't care about, you can save a lot of money. Conversely, paying for a really nice feature you don't need can cost you a lot of money.
One of the best examples of this is an e-book reader. When the Kindle first came out, it had a black and white e-ink display. This e-ink display used very little battery, so the device ran a really long time on a single charge. The problem was that the display was black and white. Enter the iPad, Kindle Fire and Nook Color. These are not e-book readers. They are tablets that can be used as e-book readers. They have far more features, have color screens and are far more powerful. They tend to be "better" in every way, except for battery life. Tablet screens require power to maintain their display, while e-ink does not. If you take a minute to read a page, a tablet has been draining the battery for the whole minute, while the e-book reader only drained the battery for the second it took to turn the page.
This is an example where less is more. By having a black-and-white e-ink screen, the battery lasts a lot longer. If you are reading a book, you don't care about color, or the slow refresh rate. You also enjoy the fact that you can read your book outside!
By concentrating on the features you need (I should be able to read a book) and not on the features you don't need (I don't need to play games or watch a video), you can save yourself a lot of money. How much money? The Txtr Beagle is a new e-book reader that is coming out soon. The expected price for it is $13. That is not a typo. Compare that with the $499 you would spend on the current top of the line iPad with retina display. That is $486, or 97.4%, in savings.
How does a company get an e-book reader to cost that little? By removing features. First, one of the most expensive parts of an e-book reader is the battery. Built-in rechargeable lithium-ion batteries are expensive, and you also have to ship a charger for the end user to charge the device with. With the Beagle, you just use AAA batteries. There is no need to ship a USB or power cable. The device can get away with that because it requires so little power, since it is a true e-book reader: it uses e-ink and only lets you read books. When I told some people about this, they looked at me funny. They see it as being overly cheap, when I think it is genius. It drastically lowers the cost.
The device does not contain WiFi or 3G. You have to load books into it using a Bluetooth device. The Beagle only contains 4GB of space for books. For a true e-book reader, that is plenty of space. E-books are small. It's the music and videos that eat up space on a tablet.
This device does one thing and it does it well. They innovated by removing features, not adding features. The biggest feature of this device is the price. While you get what you pay for, if you really are looking for a plain old e-book reader, then it is well worth the money.
Although this is a bit of an extreme example (I don't think I can find a 97% savings on anything else), you should figure out what features you want, and what features you don't need. Less is more.
Wednesday, November 28, 2012
Mark and Sweep Optimization: Generations
One of the performance penalties of the mark-and-sweep algorithm is that it searches all active objects in your heap. This means the larger your heap is, the longer the GC cycle takes. One way to increase performance is to split your heap into generations.
Let's split the heap into three generations: young, middle and old. All new objects go into the young generation. When you fill up the young generation, you GC that generation only. If an object survives enough GC cycles, then that object gets moved to the middle generation. If that generation fills up, you GC that generation and move any longer lived objects into the next generation.
If you keep the young generation small, then its GC cycles are really fast. In a typical Java application, most objects are GC'ed very quickly because they are temporary. Think of all the BigDecimal and String instances that you manipulate. Those classes are immutable; every operation creates a new instance, and the old one can be GC'ed in the young generation.
With a generational garbage collector, you perform more GC cycles, but since the young generation is small each young generation GC cycle is a lot faster. The longer GC cycles run far less frequently.
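To make the promotion rule concrete, here is a toy Java sketch of a young-generation collection. It is purely illustrative: the class, the promotion threshold and the reachability flag are all made up for this post, and a real collector works on raw heap memory, not object lists.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class GenerationalSketch {
    static final int PROMOTION_AGE = 3; // made-up threshold: survive 3 young GCs, then promote

    static class Obj {
        int age = 0;
        boolean reachable = true; // pretend the mark phase already ran and set this
    }

    final List<Obj> young = new ArrayList<Obj>();
    final List<Obj> old = new ArrayList<Obj>();

    // Collect only the young generation: sweep the garbage, age the survivors,
    // and promote anything that has survived enough cycles.
    void youngGc() {
        for (Iterator<Obj> it = young.iterator(); it.hasNext(); ) {
            Obj o = it.next();
            if (!o.reachable) {
                it.remove();            // most young objects die here
            } else if (++o.age >= PROMOTION_AGE) {
                it.remove();
                old.add(o);             // long-lived object moves to the next generation
            }
        }
    }

    public static void main(String[] args) {
        GenerationalSketch heap = new GenerationalSketch();
        Obj longLived = new Obj();
        heap.young.add(longLived);
        for (int i = 0; i < 3; i++) {
            Obj garbage = new Obj();
            garbage.reachable = false;  // temporary object, dies in the young generation
            heap.young.add(garbage);
            heap.youngGc();
        }
        System.out.println(heap.old.contains(longLived)); // true: it was promoted
    }
}
```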
Tuesday, November 27, 2012
MK802: FoxFi
I use ethernet for my MK802. This means I have an available WiFi card to play around with. I decided to create a guest WiFi access point using FoxFi. FoxFi is a tethering app for phones. The goal was to "tether" the WiFi over ethernet. Guests would get an easy password while my normal WiFi would keep the longer, more cryptic password. Unfortunately, FoxFi crashed on the MK802. It probably has something to do with the lack of a 3G connection. Oh well.
Monday, November 26, 2012
Android ethernet support
Android technically supports ethernet, but most devices do not. On those devices, the only way to access your local network is over WiFi. I have the I/O Crest SY-ADA24005 USB 2.0 Ethernet Adapter connected to the MK802, and I have been playing around with various DLNA apps for Android. A few of the apps won't run in my configuration. They detect whether WiFi is enabled, and if it is not, they pop up a message telling you to enable WiFi. This check can be user friendly on phones, but it means I can't use the app without disabling ethernet and switching to WiFi. This can be annoying.
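If I had to guess, the check these apps do looks something like the sketch below. The Activity and its logic are hypothetical, not taken from any particular app; the WifiManager and ConnectivityManager calls are standard Android APIs (they also require the ACCESS_WIFI_STATE and ACCESS_NETWORK_STATE permissions in the manifest). A friendlier app would ask whether any network is connected, ethernet included, instead of insisting that the WiFi radio be on.

```java
import android.app.Activity;
import android.content.Context;
import android.net.ConnectivityManager;
import android.net.NetworkInfo;
import android.net.wifi.WifiManager;
import android.os.Bundle;
import android.widget.Toast;

public class StreamingActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // What the unfriendly apps appear to do: insist that the WiFi radio is on.
        WifiManager wifi = (WifiManager) getSystemService(Context.WIFI_SERVICE);
        if (!wifi.isWifiEnabled()) {
            Toast.makeText(this, "Please enable WiFi", Toast.LENGTH_LONG).show();
            finish();
            return;
        }

        // A friendlier check: is there *any* connected network (WiFi, ethernet, ...)?
        ConnectivityManager cm =
                (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
        NetworkInfo active = cm.getActiveNetworkInfo();
        boolean online = active != null && active.isConnected();
        // proceed with DLNA browsing only when online
    }
}
```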
Friday, November 23, 2012
Forcing orientation in Android
Some apps want you to hold your phone in a particular way. They want the screen to be tall instead of wide. Although I understand only wanting to support one orientation, it makes it difficult to use the app on a device that does not physically rotate. Specifically, I can't rotate my TV! I find it very frustrating when I start up an app on my MK802 and the screen orientation changes. That change automatically changes how the mouse works as well. When I try to move the mouse down, the pointer actually moves to the right, since the right side of the screen is now the bottom of the app. It becomes very difficult to close the app with the mouse. Luckily, my Mele Air Mouse has back and arrow keys that allow me to exit most applications.
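For reference, locking the orientation is trivial for an app developer, which is probably why so many apps do it. The Activity below is hypothetical, but setRequestedOrientation() (or the equivalent android:screenOrientation="portrait" attribute in the manifest) is the standard way to force portrait, and it is the call that causes the rotation I'm complaining about:

```java
import android.app.Activity;
import android.content.pm.ActivityInfo;
import android.os.Bundle;

public class PortraitOnlyActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Lock the screen to portrait, even on devices (or TVs) that cannot rotate.
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);
    }
}
```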
Thursday, November 22, 2012
Tracking HashMap resizes
In Java, the initial capacity of a HashMap can be important. The internal table size is a power of two and doubles every time the number of entries exceeds the capacity times the load factor (if your HashMap capacity is 16 with a load factor of 0.75, then after you add the 13th item, the HashMap grows to 32). The resize operation is fairly expensive, so it is recommended to minimize the number of resizes a HashMap performs. The initial capacity turns out to be pretty important for the performance of your application.
One thing that is missing in Java's default HashMap implementation is the ability to track the resizes of a HashMap. It would be nice to write a log message every time the HashMap resizes. In this log message, I would like to see the old and new size, as well as the class/line number of the code that created the HashMap. After reviewing the logs, you could get an idea of which HashMaps are not sized correctly up front. Unfortunately, the HashMap.resize() method is package-private instead of protected, so we can't override the method to add the log message.
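Since resize() can't be overridden, the practical workaround is to size the map up front so it never resizes. A minimal sketch, assuming the default load factor of 0.75 (the helper method is mine, not part of the JDK):

```java
import java.util.HashMap;
import java.util.Map;

public class PresizedMaps {
    // Pick an initial capacity so that `expected` entries never trigger a resize.
    static <K, V> Map<K, V> newMapFor(int expected) {
        // The map resizes once entries exceed capacity * 0.75, so we need
        // capacity >= expected / 0.75.
        int capacity = (int) Math.ceil(expected / 0.75);
        return new HashMap<K, V>(capacity);
    }

    public static void main(String[] args) {
        // 13 entries -> requested capacity 18, which HashMap rounds up to the next
        // power of two (32), giving a threshold of 24. No resize ever happens.
        Map<String, Integer> m = newMapFor(13);
        for (int i = 0; i < 13; i++) {
            m.put("key" + i, i);
        }
        System.out.println(m.size());
    }
}
```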
Wednesday, November 21, 2012
MK802: TNT Streaming App
I got excited when I saw this article on GeekSugar about TV apps for your iPhone and Android devices. I thought to myself, I have an MK802! I went through the slides. The apps can be put into two categories: TV providers and channels. A lot of TV providers, like Comcast and Time Warner, have started to provide apps to stream TV. You have to be a subscriber to take advantage of those services. I have Verizon FiOS, which is NOT on the list of TV providers with a streaming app. The next category is channels that provide their content through a streaming app. Most are premium channels, like HBO and Showtime, but TNT was on the list. There are a few TNT shows that my wife and I watch. I fired up the Play store on my MK802 and discovered that the app wasn't available there. I decided to install the app onto my Nexus 7 and used Bluetooth App Sender to upload the APK file to my Dropbox account. On the MK802, I installed the APK file from Dropbox. After starting the app, two things happened: 1) the screen rotated so all the content was sideways and 2) the app crashed. Oh well.
Tuesday, November 20, 2012
Whole disk encryption
I hear a lot about the pros (and sometimes the cons) of disk encryption. You hear about government laptops being lost or stolen, and the question arises: why wasn't the hard disk encrypted? You hear about accused criminals encrypting hard disks so that the prosecution can't get any evidence. Users will encrypt their entire disk, or a portion of it, when storing tax or other personal information. There seems to be some misunderstanding about how the technology works and what it can and cannot do.
First, we need a little background on encryption. There are two different types of encryption: 1-way and 2-way encryption. In 1-way, data only flows in one direction. You can only encrypt the information. You can't decrypt it. This is commonly called hashing. This type of encryption might seem worthless (and it doesn't have much use in disk encryption), but it has a whole lot of useful purposes that are unrelated to disk encryption. 2-way encryption is the type that allows you to encrypt and decrypt your data. 2-way encryption is protected by a key. In disk encryption, that key is usually a password, but not always. In disk encryption, you use the key to encrypt the data and you use the key to decrypt the data. The main thing to learn is that in order to read (decrypt) the encrypted data, you must enter a password/key. This becomes important when talking about whole disk encryption.
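Here is a minimal Java sketch of the difference, using standard JDK classes (the sample data is just a placeholder, and real disk encryption uses more involved modes and key handling than this demo). The hash has no way back to the original data; the encrypted bytes come back exactly as they went in, but only for someone holding the key.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;

public class OneWayVsTwoWay {
    public static void main(String[] args) throws Exception {
        byte[] data = "my tax records".getBytes(StandardCharsets.UTF_8);

        // 1-way: hashing. There is no call that turns the digest back into the data.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        System.out.println("digest length: " + digest.length);

        // 2-way: encryption with a key. The same key decrypts what it encrypted.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] encrypted = cipher.doFinal(data);
        cipher.init(Cipher.DECRYPT_MODE, key);
        byte[] decrypted = cipher.doFinal(encrypted);

        System.out.println(Arrays.equals(data, decrypted)); // true
    }
}
```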
Disk encryption comes in two flavors: whole disk encryption and folder/file encryption. In whole disk encryption, your entire disk is encrypted. In folder/file encryption, only a section of your disk is encrypted. Most media outlets tend to talk about whole disk encryption as the technology everyone should use. They never mention any of the downsides. They never say why it might be a good idea to use folder encryption.
Let's start with booting your computer. If you have whole disk encryption and the technology is used correctly, it should prompt you for a password immediately, before Windows even boots. If it does not, then the setup is flawed! The problem here is the battle between security and user friendliness. It is not very user friendly to force a user to enter one password to turn on the computer and then a different password to log into it. Some vendors try to get the best of both worlds by using the computer's fingerprint as your key (this is why I made the distinction above about password vs key). Your computer has a unique set of hardware in it. The disk encryption software can look at that hardware and generate a key that can be used for encrypting the hard disk. This means the disk is encrypted and you don't have to enter a password. Seems nice, but two things should pop into your head: 1) what if I change the hardware, and, most importantly, 2) what if someone steals the ENTIRE computer? This encryption scheme doesn't help the government agencies who lost entire laptops full of social security numbers.
Another problem that is often overlooked is performance. Encrypting and decrypting data uses your CPU, and the more it uses your CPU, the less CPU is available for every other program running on your computer. There are CPUs out there that are far more powerful than consumers need, but there is a growing trend to use power-efficient CPUs instead of power-hungry CPUs. On top of that, encrypted data is larger. Depending on the technology that is used, it could be 50% larger. Although that eats up more disk space, the bigger problem is that you must transfer more data to memory to decrypt it before you can use it. That means disk reads/writes are a lot slower, and the disk encryption software is consuming a chunk of your RAM. Depending on what tasks you are performing, these penalties can be pretty significant. The more of a power user you are, the more you will feel this pain.
Encryption is supposed to increase security, but there is one area that it doesn't even try to help: spyware/malware. If your computer gets attacked by malware, and you use whole disk encryption, you have already encrypted the disk. The malware has access to every file that it would have had access to if you didn't encrypt your hard disk. There is no protection there.
Let's talk about folder encryption. Most operating systems support this right out of the box. With folder encryption, your computer boots just like it did before encryption. You only get prompted for a password when you try to access an encrypted folder. You can also have multiple encrypted folders, each with a different password. Although this can get confusing, it can help segregate your important information. If someone steals your entire computer, your important folders are still encrypted. The thief still gets your bookmarks and any data that you didn't encrypt, but the responsibility is on you to decide what is important enough to encrypt.
Since you are only encrypting your sensitive information, you do not suffer the performance penalty during your day-to-day activities. You only pay the cost when accessing your personal information. For some of us, that is once a year when doing taxes. If your computer gets attacked, your folders are still encrypted, which makes it a lot harder (although still possible) for malware to steal the sensitive information.
I tend to hear security professionals compare computer security with bank security. You can't have an absolutely secure system. You have layers of security. Whole disk encryption is an attempt to have absolute security. Folder encryption is a layer. You protect the information that is the most important to you. You shouldn't be trying to encrypt all the day-to-day activities that you do (unless you are a business or a criminal). For personal computers, protect the information that should be protected.
Monday, November 19, 2012
Simple Technology: Watering your Christmas tree
Technology tends to focus on creating new products using the most complex science of the day. During the industrial revolution, many new inventions used steam to power them. During the electronics age, everything was powered with tiny motors and relays. In the computer age, every new "thing" either has a computer, is a computer, or runs on a computer. Although I love the latest and greatest, I love it even more when something new comes out that uses older technology and works better than any fancier alternative.
Christmas trees tend to consume a lot of water. You end up bending over a lot, adding more water once or twice a day, and it can be hard to reach that far under the tree. What happens if you have to travel for the holidays? Last year, I started to research a watering system for the Christmas tree. I heard of funnel systems with tubes, but they were unsightly. I saw electronic systems that notified you when the water level was low. That doesn't exactly water the tree when it needs it; it just tells you when it needs to be watered. The next thing I saw was a box with a tube. The box looked like a Christmas present. The tube went from the bottom of the box to the water reservoir of a standard Christmas tree stand.
The device was a siphon pump. You fill the box with water. As the Christmas tree uses the water in the stand's reservoir, the box automatically fills the reservoir back up to the top. It is "smart" enough to NEVER overfill the reservoir! The box essentially triples the effective reservoir size for the tree. The fact that old technology can be so great seems so counterintuitive. How can a device with no moving parts be better than a device with a speaker and moisture sensor? How does a device with no sensor automatically know when to stop filling? How does the water get pumped to the tree stand?
This device uses technology that is 3500 years old. The box siphons water into the tree stand. Once the pump is primed, the water level inside of the box matches the water level of the stand. As you add more water to one, the siphoning action pumps the water to the other end. If you remove water from one end, then the siphon moves water from the end that has more water to the end that has less water. When you first set up the system, you add water until the water level in the tree stand is at its highest. That tells you the max fill level in the box. The tree uses the water in both reservoirs. When it gets low, you fill the box back up to the fill level.
The beauty of this technology is that it fills a need perfectly. It automatically waters the tree. There is very little that can go wrong with the box. Since it looks like a Christmas present, it is aesthetically pleasing. Due to the simple design, it is cheap to produce. It just works.
Friday, November 16, 2012
Locking out production ID's
An interesting debate pops up from time to time. From a security standpoint, it is generally a good idea to lock out an account after too many failed login attempts. This is done to prevent a dictionary or brute force attack. You disable the account to prevent the password from being guessed.
Let's go into a data center now. You have a website that connects to a database. The account the website uses to connect to the database is protected with a username and password. This raises the question: do you enforce the same lockout rules for this database account?
Based on the first paragraph, it seems obvious that for security purposes you should lock out the ID. But if you lock out the ID, your website goes down! That makes a denial of service attack incredibly simple: an attacker just has to fail enough logins to lock the ID and force the website down. On the other hand, you still have to prevent brute force and dictionary attacks.
To wrap up, you might want to have a password lockout policy for database users for security reasons, but you may NOT want to have a password lockout policy for database users for security reasons.
Thursday, November 15, 2012
Mark-and-Sweep Garbage Collection
Mark-and-Sweep is an algorithm for garbage collection. Let's start with the concept of a root. A root is a starting point for searches. Roots are usually static variables and stack variables. Roots point to objects in the heap. Those objects can then point to other objects on the heap. As you create new objects, you keep track of how much memory you are using. If, during your next allocation, you need more space than is available, you invoke the mark-and-sweep algorithm.
First, the world is stopped. Then the algorithm starts from the roots and traverses the object graph (imagine a depth-first search through your objects). Every time the algorithm visits an object, it "marks" it. This usually means flagging a bit, but it could mean other things that I will get to later. After marking the object, it continues searching for more objects to mark. If an object is already marked, the algorithm does not traverse that branch again, which is how circular object references are handled.
Once the mark phase is done, the sweep phase begins. During the sweep phase, all objects that were previously marked are moved and rearranged so that they are sitting next to each other on the heap. The sweep operation simply overwrites any blocks of memory that were not "marked". After the sweep phase, all objects that are no longer referenced are just gone.
Many critics of mark-and-sweep point out that the "stop the world" pause of the garbage collection can take a long time. This can make an application "feel" unresponsive, since the app is literally not doing anything during the GC cycle (it would be bad to change pointers while doing a depth-first search). In older versions of Java, the GC cycle did consume a lot of time, especially for large apps. Why would you use such an algorithm then?
The main advantage of mark-and-sweep is that it lets you do other things faster than you can with other memory management solutions. Specifically, allocating objects is really fast. In most other memory management solutions, you have to search for an open spot in memory, which is a linear search every time you allocate an object. With mark-and-sweep, you maintain a pointer to the first free memory block. To allocate memory, you just use the free memory pointer that you currently have, then save a new free memory pointer that points just past the block you allocated. That is constant time allocation! The penalty is periodic GCs that are linear in the size of your heap. Those GCs can be managed, however. I'll talk about that in a future blog post.
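Here is a toy sketch of that constant-time "bump the pointer" allocation, with the heap modeled as a plain byte array. All names are illustrative, not taken from any real JVM.

```java
public class BumpAllocator {
    private final byte[] heap;
    private int free = 0; // pointer to the first free byte

    BumpAllocator(int size) {
        heap = new byte[size];
    }

    // Returns the offset of the new block, or -1 if a GC cycle would be required.
    int allocate(int size) {
        if (free + size > heap.length) {
            return -1; // out of space: this is where a mark-and-sweep cycle would run
        }
        int start = free;
        free += size;   // constant time: just bump the pointer
        return start;
    }

    public static void main(String[] args) {
        BumpAllocator a = new BumpAllocator(64);
        System.out.println(a.allocate(16)); // 0
        System.out.println(a.allocate(16)); // 16
        System.out.println(a.allocate(40)); // -1, would trigger a collection
    }
}
```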
Advantages
- Fast object/memory allocation
- Memory limits
  - Mark-and-sweep is implemented using malloc/free under the hood. With this abstraction layer, you can actually set a limit on how much memory your process can use
- Locality
  - There are two important steps that increase locality
    - New objects are created next to each other in the heap
    - Objects that survive a GC are put next to each other on the heap
  - By having objects next to each other in the heap, you minimize cache misses
Disadvantages
- Long GC cycles give the appearance of slowness
  - This can be mitigated
- Excessive use of memory
  - Since GC cycles don't kick off until you run out of memory, you tend to use more memory
  - Heap sizes increase when you run out of memory. If that increase was due to a temporary spike, your heap size never decreases
- Tends to be abused
  - Developers who have never needed to do their own memory management tend to abuse automated memory management systems, so they tend to use more memory
Wednesday, November 14, 2012
MK802: BS Player
MX Player doesn't allow you to control the network streaming buffer size. Because of this, I decided to try out BS Player. BS Player didn't have a Neon Arm V7 codec; they just have a regular Arm V7 codec. Unfortunately, BS Player still won't play most of my video files fast enough, even on my 720P TV. Back to MX Player. I guess I have to wait for MX Player to get some more features.
Tuesday, November 13, 2012
WiFi Extenders
I have been looking for a WiFi extender that bridges over ethernet. Many WiFi extenders are simply repeaters. This means you connect to the repeater over WiFi, and the repeater connects to your WiFi router over WiFi. Most consumers want this setup because you don't have to worry about running ethernet cable to the far ends of your house. You buy a repeater to essentially boost your signal. Repeaters don't increase bandwidth, though.
Imagine a simple setup with one WiFi router and two wireless devices. If both devices are being used at the same time, they have to split the bandwidth between them because they share the same WiFi channel. Now extend that to the repeater layout. You still have two devices, but each connects to its own access point. The problem is that the repeater acts like just another wireless device. In reality, your router has two devices sharing its channel (your device and the repeater), and the repeater has a single device on it. You still only have half the bandwidth.
With ethernet bridging, you have two access points. You connect the access points to each other with high speed ethernet cable. It is a bit over-simplified, but you can now have two devices, and each device gets the full 802.11a/b/n bandwidth. When you have lots of wireless devices (3 laptops, 2 cell phones, 2 MK802s and one desktop), your devices can easily conflict with each other. Things get worse when you start buying WiFi IP cameras.
I finally found a relatively cheap extender that allowed me to bridge over ethernet. I bought it and started reading the instructions (imagine that!). The instructions stated that bridging over ethernet was not recommended! I tried it anyways. Once I enabled the extender, the entire WiFi network came down. I couldn't connect to either access point.
My second extender I tried was the Uspeed Wifi Repeater. This device is designed to be a repeater. It is designed to plug into an outlet in inconvenient locations. It does support ethernet bridging though. I set up the bridging and it started working. It took less than 5 minutes to set up. I now have two access points.
Monday, November 12, 2012
MediaTomb
I have been playing with DLNA to stream video to my MK802. I started trying out various DLNA servers, which led me to MediaTomb. MediaTomb is a great DLNA media server that supports storing information in a database and has a web interface for administration. It was easy to set up and get running. It is in the Gentoo Portage tree and came with a decent default configuration. MediaTomb does hash all of your files, so if your files are on a network drive, it may take a while to scan everything. I started using various DLNA media players on the MK802 while the scan was still running, and it seemed to work very well. Then something weird started to happen. The folder structure didn't match what I had. I organize my files into folders. I have folders inside of folders. I always felt that was kind of the point of folders: you can nest them! What I noticed was that MediaTomb flattened the folders. When browsing, you get a list of all folders, regardless of their parent or depth. Instead of having a movies and a shows folder, I had a lot of folders in the root. I couldn't navigate.
Friday, November 9, 2012
Mele Fly Sky Air Mouse
The Mele Fly Sky Air Mouse is a really nice device. I have been using the remote for a few days on the MK802. Being part mouse, part remote and part keyboard can get a little confusing for users. One side of the Air Mouse is a full QWERTY keyboard. The other side is half mouse and half remote control. When holding the device as a remote control, it is the perfect size. It contains the up/down/left/right buttons that you would find on a remote control. The center button inside the D-buttons is the mouse left click, however. Above the D-buttons are Enter and Back keys. If you are using the D-buttons and want to select something, you have to use the Enter button, not the left mouse button. I find it very counterintuitive, but you learn.
The Home button does not take you to the home screen. The Settings button does open the settings menu, however. The Volume up and down buttons do adjust the volume, but the Mute button does not mute. It is worth noting that almost every button that DOESN'T have a default action does get sent to the app. In a future post, I will give out all the integer codes for the Air Mouse. I was pleasantly surprised to find that the arrow keys worked in far more Android interfaces than I thought they would. It is almost like the Android developers had some foresight that someone would want to use a D-pad to navigate. This was nice, since using the mouse took a little effort.
The mouse kind of acts like a WiiMote in the sense that waving the Air Mouse moves the mouse pointer. It is not a pointer, though. Just pointing at the icon you want to click won't work like it does on the WiiMote. If I had to guess, I think the Air Mouse contains a gyroscope. The gyroscope measures changes in pitch and yaw. Yaw changes move the mouse left and right. Pitch changes move the mouse up and down. The hard part to get used to is that the mouse moves based on the change in pitch and yaw, not the absolute value like a WiiMote. For example, point a WiiMote toward the ground, then tilt it up 45 degrees. Since you only tilted up 45 degrees, you are still mostly pointing toward the ground, so the Wii's pointer won't move. Now do the same thing with the Air Mouse. The mouse pointer moves up! It doesn't matter what direction the Air Mouse is pointing; the mouse will move based on the pitch and yaw changes. It takes some getting used to, but I adapted pretty quickly. Another thing to note is that altitude, roll and side-to-side changes do not affect the mouse pointer.
There are some pluses and minuses of this setup versus the WiiMote pointer style. WiiMotes can get shaky when trying to select something. The pointer drifts because most people can't keep their arms perfectly still. For the Air Mouse, you will find that the mouse pointer doesn't have this shakiness. If you are currently pointing at what you want to point at, it doesn't move away from it that often. The Air Mouse only moves the mouse if you have changed the pitch or yaw enough. This threshold is set frustratingly high, but I have a feeling it is set that way to prevent some of the jitters that the WiiMote suffers. It does make it hard to select small buttons on the screen, however.
The keys on the keyboard were large enough to hit very easily, but the keyboard was a little too wide to type on at a fast rate. I think this was a design trade-off between being a remote and being a keyboard. If they made the keyboard narrower, then the remote wouldn't be as long and it might be a little awkward to hold as a remote control. Because of that, I can't fault the designers for this. I found the Shift and FN keys a little counterintuitive. The Shift key acts as a standard Shift key: it only makes things upper case if you hold it down while typing a letter key. The FN key is a mode, however. Every time you hit the FN key, it switches between the white and the orange keys. This gets difficult when you are typing something that requires you to go back and forth between white and orange. The FN key doesn't always register, so you don't know which mode you are in until you type. This can be demonstrated by typing in a URL. You start with H, T, T, P in white, then Colon in orange, then Slash and Slash in white.
Overall, I am very happy with the device. The mouse has had occasional problems that required a reboot (there is a small button to press on the side), but it hasn't happened often. I have been able to navigate through most interfaces with ease. I can switch between using it as a remote, a mouse and a keyboard to navigate, configure and use my Android TV.
Thursday, November 8, 2012
MK802: Accelerated MX Player
Update: The new MK802 III supports H/W decoding for all files
In a previous post, I mentioned how MX Player wouldn't use the H/W decoder in the MK802. The Play store lists codec packs for MX Player, but I didn't know which one to use. On top of that, the codec packs say not to install them unless MX Player specifically tells you to, and MX Player wasn't telling me which one. Instead of trial and error, I decided to just try different players. One of the ones I tried was called aVia. aVia wouldn't use the hardware decoder either, but a popup came up telling me to install the ARM V7 Neon codec. I installed it and aVia started to use the hardware decoder! A quick check revealed that MX Player had a codec with the same name. MX Player started to use the hardware decoder too. The hardware decoder wasn't used for Xvid files, but the software decoder was now fast enough to play those files perfectly fine on my 720P TV. MX Player still won't play video well enough on my 1080p TV.
Wednesday, November 7, 2012
Reference Counting Garbage Collection
Reference Counting is a mechanism for garbage collection. Every object gets a counter that starts at zero when the object is created. Whenever another object or scope takes a reference to the object, the counter is incremented by one. Whenever one of those references goes out of scope, the counter is decremented. If a decrement brings the counter to zero, the object is destroyed. Because the object is being destroyed, it decrements the counters of all the objects it references, and if any of those counters reaches zero, those objects are destroyed as well. This chain reaction destroys all objects that are no longer used... or so you think.
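Before getting to that caveat, here is what the counter bookkeeping looks like in practice. This is a minimal sketch in Python; the class and method names (RefCounted, retain, release, add_reference_to) are made up for illustration and are not from any particular runtime.

    # Minimal, illustrative reference counting. Real collectors do this
    # bookkeeping inside the runtime; the names here are invented for the example.
    class RefCounted:
        def __init__(self):
            self.count = 0        # the counter starts at zero
            self.children = []    # objects this object holds references to

        def retain(self):
            self.count += 1       # someone took a reference to this object

        def release(self):
            self.count -= 1       # a reference went out of scope
            if self.count == 0:
                # "Destroy" the object: drop every reference it was holding,
                # which may cascade and destroy those objects as well.
                for child in self.children:
                    child.release()
                self.children = []
                print("destroyed", id(self))

        def add_reference_to(self, other):
            other.retain()                 # the new reference bumps the counter
            self.children.append(other)

    # A scope creates two objects and links them: A -> B
    a = RefCounted(); a.retain()   # the creating scope holds a reference to a
    b = RefCounted(); b.retain()   # ...and to b
    a.add_reference_to(b)          # b's counter is now 2
    a.release()                    # a hits zero, is destroyed, and releases b
    b.release()                    # b hits zero and is destroyed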
Advantages:
- Objects are destroyed as soon as they fall out of scope, so the program never holds on to more memory than it needs
- The heap grows and shrinks based on the program's needs
- Garbage Collection runs in constant time, since you don't have to search for objects to destroy
- No stop-the-world garbage collection cycles
Disadvantages:
- Cyclic pointers are not supported. Imagine a scenario where object A points to object B, and object B points back to object A. The counters for both objects will never reach zero, since each always has something pointing to it (the other). This is a memory leak (see the sketch after this list)
- History note: this is why older versions of Internet Explorer had a tendency to consume all of your memory. The DOM tree and the JavaScript object tree were managed with reference counting, so cycles between them were never freed
- The counters require synchronization, so there is a performance penalty for all counter changes in a multi-threaded environment
- Allocating memory still requires searching for an empty spot large enough to hold your object, so allocation is slower than the simple pointer-bump allocation a compacting collector can use
- Because the allocator reuses whatever free slots it finds, sequentially created objects tend not to be located next to each other in memory
- The resulting fragmentation is commonly referred to as Swiss cheese memory!
- Due to non-locality, there is an increase in cache misses
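The cyclic pointer problem in the first disadvantage is easy to reproduce with the same illustrative sketch from above: once A and B point at each other, dropping the outside references still leaves both counters at one, so neither object is ever destroyed.

    # Continuing the RefCounted sketch above: a cycle that never reaches zero.
    a = RefCounted(); a.retain()   # the scope holds a
    b = RefCounted(); b.retain()   # the scope holds b
    a.add_reference_to(b)          # A -> B   (b.count is now 2)
    b.add_reference_to(a)          # B -> A   (a.count is now 2)
    a.release()                    # scope drops a: 2 -> 1, not destroyed
    b.release()                    # scope drops b: 2 -> 1, not destroyed
    # Both counters are stuck at 1 because each object keeps the other alive,
    # so a pure reference-counting collector leaks this memory.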
Tuesday, November 6, 2012
MK802: BubbleUPnP
Although BubbleUPnP seemed like a good DLNA app when I launched it, I quickly learned it was not for me. The biggest problem I found right off the bat was the ineffectiveness of the Enter and Back buttons on my remote control. I was able to use the Up and Down buttons to pick a folder, but when I hit the Enter button to go into the folder, nothing happened. The folder glowed for a second, and I was stuck there. I had to put the remote control into mouse mode and left-click the folder. Since this is critical for me, I decided to keep hitting the Back button until I was back in the Play store so I could uninstall it. Surprise number two was that the Back button would not let me exit the program. It took me to the parent folder, but once I was on the app's home screen, I had to use the mouse to pull up the settings menu and select "Exit". I have uninstalled the app.
Monday, November 5, 2012
USB Crash
One of the USB buses has crashed on my computer! Devices can still get power, but nothing I plug in gets recognized by the computer. I had 8 devices plugged in, but most of them were only for charging. Only 3 devices were transmitting data. Luckily two USB ports still work.
Friday, November 2, 2012
Missing a Terabyte
I'm a bit old school. I am sad to say that I have been using fdisk for half of my life; I can count on one hand how many times I have used a graphical program to partition a hard disk. Imagine my surprise when I realized only 2TB of my 3TB hard disk was usable! It turns out MBR partition tables can't address anything beyond the first 2TB of a disk. To partition a bigger hard disk, you have to use a GUID Partition Table (GPT). According to the Wikipedia article, fdisk doesn't support GPT, so I guess I have to learn a new partitioning tool.
Thursday, November 1, 2012
What is garbage collection?
When the iPhone 5 came out, there was a resurgence of the iOS vs Android debate. In one of the comment sections of an article (I really have to stop reading the comments), an iPhone user argued that iOS is better because Objective-C programs written for it use Automatic Reference Counting (ARC) instead of the "Garbage Collection" that Android uses. As a Computer Scientist, my head almost blew up.
I know what the person was talking about, but it irritates me when it is used in arguments. It may be nitpicking, but ARC is GC. Garbage Collection means the developer doesn't have to explicitly delete any memory that they create. It does not imply anything more than that. Reference Counting is one implementation of Garbage Collection. Mark-and-Sweep is another implementation. The person was trying to argue ARC vs Mark-and-Sweep, but didn't know the term Mark-and-Sweep, or Generational Garbage Collection. It was obvious the person didn't understand the changes in Java's Garbage Collection implementation since Java 1.1. That is not unreasonable. When teaching Garbage Collection algorithms, many courses stop at basic stop-the-world Mark-and-Sweep. All the advances in Java's GC implementation are variations of Mark-and-Sweep.
My main problem with the comments in this thread was that the person claimed ARC had no negatives while "Garbage Collection" had no positives. The comments sounded like they came from someone with no formal Computer Science training who has never programmed for anything other than iOS. In future posts, I will talk about some of the misconceptions around Garbage Collection and go over the long list of pros and cons for each implementation.