There are some general tips for secure programming. Check pointers. Validate input from users. Use parameter binding when executing SQL. Those are pretty obvious. Every day, more and more alerts go out about security vulnerabilities. In almost none of the alerts does the actual security hole get discussed. You sometimes see example exploiting code, but you almost never see example exploited code. How are software engineers supposed to learn from the mistakes of others? It seems like we have to repeat those mistakes ourselves before we learn the lessons. I understand that proprietary software vendors don't want to release the source code for their software, and that they fear doing so would allow more people to hack it, but how can we grow as an industry if the details are hidden from us?
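To make the parameter-binding tip concrete, here is a minimal JDBC sketch; the table and column names are made up for illustration, and binding the value keeps user input from ever being parsed as SQL.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {

    public boolean userExists( Connection conn, String userName ) throws SQLException {
        // The ? placeholder is bound as data; it is never concatenated into the SQL text.
        PreparedStatement stmt = conn.prepareStatement( "SELECT 1 FROM users WHERE user_name = ?" );
        try {
            stmt.setString( 1, userName );
            ResultSet rs = stmt.executeQuery();
            try {
                return rs.next();
            } finally {
                rs.close();
            }
        } finally {
            stmt.close();
        }
    }
}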
Time and time again I hear about another company or another product being vulnerable to a major security flaw. Many of those flaws sound like they were preventable. The art of programming securely doesn't seem to exist. Blindly following guidelines doesn't always work; it often makes the problem worse. Developers need to be educated about writing secure code. I find it amazing that an industry this important isn't allowed to learn from its own mistakes. Everything is kept secret. As long as new developers keep entering the workforce, our industry will be doomed to repeat itself.
JS Ext
Tuesday, December 31, 2013
Monday, December 30, 2013
When GPS Attacks!
My mom recently purchased a house. She was so excited about her new home that she wanted to host Christmas. A few weeks later, when Christmas rolled around, I packed up my car and drove to her house. I used Waze as my navigation app. For most of the ride I followed the same path I always follow to visit my mom. Once we got closer, the GPS kicked in and started taking us down roads I was less familiar with. I blindly followed the GPS until we arrived at a house that looked surprisingly empty. The house number was right, but the street name was wrong. I fired up Google Maps and it wanted to send me to the same location. The township and zip code were correct. Even the neighborhood name was correct. I called my mom and she told me the intersection she lived on. Luckily, I was only two blocks away. Two minutes later I was at my mom's new house.
After my brother arrived, he told me he had meant to warn me not to use the address with a GPS. He told me to use the cross street. Apparently all GPSes take you to the same wrong house. Even my mom's TomTom took her to the same wrong house. This would have been just a funny family story if a different topic hadn't come up during dinner. The underwriter of the homeowner's insurance was cancelling my mom's policy and she didn't know what to do. My brother is involved with that industry (home inspections), so he wanted to read the letter. The letter explained that they had driven by the house and seen that the roof was in bad shape. They would not insure my mom's house until she paid for a new roof and had the house re-inspected. My brother would never have let my mom buy a house with a bad roof. He can recognize a bad roof himself, since he worked as a roofer while going to school. He went outside to figure out what was wrong. His only guess was that the underwriter didn't like the awning over the back porch. That is when I brought up the GPS fiasco.
This wouldn't be the first time a company caused harm due to GPS issues. Now my mom has to call up her insurance company and try to explain that there is a high probability the underwriter went to the wrong house! So much can be lost in translation. This underwriter is going to cause lots of stress and may cost my mom lots of money if they don't admit their mistake (if they made one). At this point, I don't know if the problem is the awning or the misplaced house, but I'm leaning towards a misplaced house.
Tuesday, December 24, 2013
Target Data Breach and the Lack of Journalism
I work pretty close to a Target. So close that my wife sends me to Target to pick up baby supplies from time to time. Unluckily for me, my wife sent me on November 27th. That was the first day that credit cards started getting stolen from Target.
Like all Target customers, I want to know what happened. As an avid Krebs On Security reader, I knew I would find a great explanation of what happened there. For technical news that makes it to the mainstream, I like to compare what the mainstream says versus what the technical sites say. For issues like the Federal Affordable Care Act exchange, I found the mainstream reporting the opposite of what the technical sites reported. For the Target data breach, I found that nobody was doing any actual investigation, except for Brian Krebs.
Every article I found about the data breach seemed to have similar wording to the various blog posts on Krebs. Some articles seemed to quote Krebs word for word but didn't actually put quotes (") around the words. All of them referred to Krebs, but most of them never actually linked to the source material. When I finally pulled up Krebs, I was surprised how much more information was available in his posts versus what was in the mainstream media. He goes into detail about the discovery process and other interesting details.
I find it disappointing that the media doesn't link to Krebs's website. The reason they call it the World Wide Web is because sites link to each other. Online news organizations don't want you to leave their website. They also can't rip off an entire exposé by Krebs. I am fine with them giving minimal (dumbed down) information. When articles are posted about international relations, economics or California state law, I appreciate that the articles are dumbed down. This allows me to quickly get a general understanding of the topic. I wish they would post reference links so that I can easily drill down to learn more details on the topics. Some topics, like economics, I don't fully understand, but I can still follow the more technical publications. At first, I thought this desire was a result of reading too many academic papers. Then I remembered high school English class, where I was taught to cite my sources.
As for the Krebs coverage of the data breach, the part that I am most interested in was not covered: there was no detail on how the breach occurred. This is not a surprise. The breach is too new. I'm sure one day in the future I will be reading the front page of Krebs and will see a post about how it happened.
Sunday, December 22, 2013
Old Production Code Can Have Defects
I have been struggling lately with the idea that once code works in Prod, it is infallible. I have worked on a few defects recently where we identified bugs in code that is over 4 years old. The recommendation always included fixing the 4-year-old defects, but management seems to dislike that. In their minds, if the code worked fine for 4 years, then we must have changed something that caused the issue. While true to a certain degree, the problem lies in the kind of defect that occurs. I have been seeing two types of "old" defects that have manifested themselves recently.
The first type of defect is the good ol' race condition. In my college multithreading class, I was taught that any possible path of execution is just that: possible. Therefore any possible path that is wrong must be eliminated. We had to prove that every possible execution path was correct. Just because you work to reduce the likelihood of a particular execution path doesn't mean it will never occur. The reason is the decision-making process that thread schedulers go through. Changes in your operating system, your hardware, your JVM version or even the code inside of the threads being scheduled could change the order in which lines of code get executed. This means you must eliminate race conditions because they could cause future problems! That is not how management thinks, though.
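As a minimal illustration (not the defect in question), here is the classic racy counter: the increment looks like one step but is really a read-modify-write that the scheduler can interleave, so updates can be lost on any run, however unlikely.

public class Counter {

    private int count = 0;

    // Looks atomic, but is read, add, write.
    // Two threads can both read the same value and one update is lost.
    public void increment() {
        count++;
    }

    public int get() {
        return count;
    }
}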
The second type of defect is one where the calling code changed in a way that triggers the defect. This is the most common scenario I have seen. Imagine a Set<String> in Java. A Set object contains a collection of String objects. The set is supposed to be unique. If you try to add the same string twice, then the second Set.add() is ignored. I saw an implementation of a Set-like container that claimed the same uniqueness property. The problem was that the add() method didn't honor that contract. You could add the same String as many times as you want. In this case, it wasn't a String, though. The developer required the API user to enforce the uniqueness property by checking the "key" property of all the objects in the set. It was a very large, non-changing object (which is why it was being cached). An error was introduced in the call stack 10 frames up. This error caused the code one frame up to check for a null key, but it still had the full, correct object that it was adding. This caused the code to call add() on the set 60k times. Obviously we ran out of memory.
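Here is a rough sketch of that second defect, with hypothetical names: the container claims set-like uniqueness but leaves the check to the caller, so once an upstream bug nulls out the key, the guard fails and add() runs unchecked.

import java.util.ArrayList;
import java.util.List;

public class KeyedCache {

    public static class Item {
        private final String key;
        public Item( String key ) { this.key = key; }
        public String getKey() { return key; }
    }

    private final List<Item> items = new ArrayList<Item>();

    // Callers were expected to call this before add() to enforce uniqueness.
    public boolean containsKey( String key ) {
        for ( Item i : items ) {
            if ( key != null && key.equals( i.getKey() ) ) {
                return true;
            }
        }
        return false;
    }

    // Does not honor the uniqueness contract on its own.
    public void add( Item item ) {
        items.add( item );
    }
}

// Caller-side guard that silently fails once item.getKey() returns null:
// if ( !cache.containsKey( item.getKey() ) ) { cache.add( item ); }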
In both of these scenarios, fix your code. I don't care how old it is. A defect is a defect.
Thursday, December 19, 2013
Failure to Share Media in Person
I had some family over and my brother wanted to show everyone some videos he created. He had copies of the videos on a Micro SD card. The problem is my entertainment center in my living room doesn't really support in-person sharing. I have two devices hooked up to my TV that can play media. The first is a Windows PC. This is my gaming PC. The PC isn't in my living room, though. It is in my crawlspace under the living room. I ran HDMI and USB cables from the crawlspace to the living room. Due to the length of the USB cables, they don't work well for data transfer. The USB cables are for game controllers. My brother tried plugging in his Samsung Note and the device wouldn't even recognize the USB connection.
My second device is my MK802 III. This is the primary display for my TV. I have MX Player Pro installed. I put in the Micro SD card, but I didn't know how to "mount" the card at the time. The only time I use the SD card slot is when I travel with the MK802. When you boot the MK802 with an SD card in it, the card gets mounted on boot. After everyone left (when there was less pressure to get it to work) I found the menu option to mount the SD card.
This drove my brother nuts. All this technology and it wasn't usable for him. We settled on plugging the SD card into my laptop, then copying the files over SMB and Wifi. These were HD video clips, though. Needless to say, it was going to take a while. The clips were available on YouTube, though. So, we fired up the YouTube app on the MK802 and immediately ran into issues. The problem was the MK802 was connecting over Wifi. While I do have 3 access points in my house, the laptop and the MK802 were both right next to the same access point. On top of that, some family members were on their phones and tablets browsing the internet. My Wifi bandwidth was exhausted.
We switched back to the gaming PC. My brother fired up IE to go to YouTube and started making fun of me even more. Adobe Flash wasn't installed. He started saying how I never use the PC, and I fired back asking who uses IE. Google Chrome was installed....and running....with YouTube already open! He started playing videos in Chrome and noticed the taskbar wasn't going away when he put the video in fullscreen mode. I started to talk about how this was a bug in Windows 7 and he cut me off, talking about how it doesn't happen to him.
I need to find a better way for people to come over and transfer video. I should plug an ethernet cable into my MK802. Since the MK802 is rooted, I should install a Samba server so that guests can download the files while we are watching them. I also should do some testing to make sure I can plug in a USB Mass Storage device. It is much easier to plug the SD card into a USB reader than into the slot on the MK802. The MK802 is semi-hidden, while the USB hub that connects to it is more easily accessible.
Monday, December 16, 2013
Thread Safety: Protect Your Lock
Java provides the ability for an object to lock on itself. You can do this by using the synchronized keyword as part of a method definition. Many programmers don't use that form of the keyword, though. You may have seen this code before:
public class MyObject {

    private final Object lock = new Object();

    public void someMethod() {
        synchronized ( lock ) {
            // actually do something
        }
    }
}
There are two points to having a private lock like this. For both scenarios, imagine what would happen if you used synchronized methods instead of the private lock.
1) If a developer extends MyObject, then any synchronized methods that they write will block your methods as well. Both pieces of code become critical sections of the same lock. This could either be a good thing or a bad thing. If the developer extending MyObject doesn't look at (or doesn't have access to) the source code, then it could be a really bad thing. Deadlocks have occurred doing this. You can also run into performance issues. If your class is sensitive to those types of issues, then I highly recommend making the class final, regardless of the synchronization type you use. This prevents people from extending your class.
2) A developer that is using MyObject can synchronize on an instance of the object. If the implementation of MyObject uses synchronized methods, that external code now holds the very lock your methods need, so it can stall (or deadlock against) every synchronized method in your class. A private lock keeps your critical sections under your control, as the sketch below illustrates.
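A quick sketch of that second scenario, assuming the MyObject class above; the client class and its long-running work are hypothetical.

public class ClientCode {

    public void doWork( MyObject shared ) {
        // If someMethod() were declared synchronized, this block would hold the
        // monitor it needs, and every other thread calling someMethod() would
        // stall until this block finished. With the private lock, the two
        // monitors are unrelated and MyObject is unaffected.
        synchronized ( shared ) {
            // imagine long-running work here while holding the object's monitor
            shared.someMethod();
        }
    }
}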
In my opinion, it is better to be educated so that you can make the right decision for your scenario. The problem with a 'private' lock is you no longer let users of your API know that a method is synchronized. JavaDoc and various auto-complete technologies let the user know that a method is synchronized. They rely on the keyword on the method to determine that. Overuse of the private lock pattern leads to a bunch of synchronized methods that are documented as not synchronized.
Thursday, December 12, 2013
Thread Safety: Beware of Java's Hashtable, Vector and Collections.synchronized*()
I am shocked at how bad people are at multithreaded programming. I took one class in college on the topic, and that was all I needed. That one class gave me far more knowledge than most programmers I know. This isn't a statement about how good I am....this is a statement about the poor state of programmers today.
One of the first mistakes I see is just throwing a Collections.synchronized*() call around a data structure or using the synchronized data structures. While very handy, these wrappers give a false sense of security. Consider the following code:
import java.util.*;

public class MySet {

    private final List list = Collections.synchronizedList( new ArrayList( 10 ) );
    private final Map map = new Hashtable();

    public void add( MyObject obj ) {
        if ( !map.containsKey( obj.getId() ) ) {
            list.add( obj );
            map.put( obj.getId(), obj );
        }
    }
}
This class makes use of a synchronized wrapper and a synchronized data structure (Hashtable). Therefore, access to the internals of 'list' and 'map' is protected and thread-safe. You know what isn't thread-safe? MySet! Imagine two threads running at the same time that are both calling add(). Below is a sequence of what line each thread is on. You will see Thread B stall 3 times waiting for Thread A to finish with the easy-to-use wrappers.
A: MySet.add()
B: MySet.add()
A: if ( !map.containsKey( obj.getId() ) ) {
B: if ( !map.containsKey( obj.getId() ) ) { // B must wait for A to finish with 'map' before it starts
A: list.add( obj );
B: list.add( obj ); // B must wait for A to finish with 'list' before it starts
A: map.put( obj.getId(), obj );
B: map.put( obj.getId(), obj ); // B must wait for A to finish with 'map' before it starts
At the end of the execution, 'list' will have 2 entries in it while 'map' will only have 1. Congratulations! You have now written a race condition. On top of that, your add() method now has 3 critical sections!
The correct way to write the synchronization should look closer to this: (I say closer because it doesn't HAVE to look like this)
import java.util.*;

public class MySet {

    private final List list = new ArrayList( 10 );
    private final Map map = new HashMap( 64 );
    private final Object lock = new Object(); // I will explain this in a future post

    public void add( MyObject obj ) {
        synchronized ( lock ) { // lock only once, do your business, then get out
            if ( !map.containsKey( obj.getId() ) ) {
                list.add( obj );
                map.put( obj.getId(), obj );
            }
        }
    }
}
In the updated code, we create a single lock that protects both data structures. Then, we use the non-thread-safe versions of the data structures. In this case, Thread B must wait for Thread A to completely finish the add() method before it can do the first check. This code is correct. Correct in this context means there is no order that Threads A and B can run in that would violate the internal invariants of the object. It is impossible for 'list' and 'map' to get out of sync with each other. It gets even better. Since there is only one critical section, the code actually runs faster!
Wednesday, December 11, 2013
XSLT-based replacement for JSPs
Every time I start a new project, I like to throw in something new and unique. I experiment with using new technology or mixing old and new technology. Sometimes the experiment is a complete failure. Sometimes the experiment leads to a pattern that I start using over and over again.
For this project, I decided to try using an alternative to JSPs. I wanted to replicate some of the features that JSPs support, though. The main three features that I wanted were support for some sort of tag library, support for reading information from some sort of Java bean, and support for basic flow control like loops. There were also a few things I wanted that JSPs don't support: unit testing with mock support and reading all files from the classpath as opposed to the filesystem. I wanted the app to get deployed as a "bag of jars" instead of a war file that requires Tomcat to be set up.
The System
The system I came up with merges multiple technologies together, but the main technologies are XML and XSLT. To understand the system, let's take a single GET request and decompose the main steps that occur.
GET /html/index.html
Step 1 - Create the page-specific bean
Every page gets a bean class and bean factory. For the index.html file, the bean class would be IndexBean.java and the factory would be IndexBeanFactory.java. The rendering engine would use reflection to instantiate a new IndexBeanFactory class. This factory would then create an instance of the IndexBean class using the request object. The factory would make any midtier calls that are needed, then call setters on the bean. The bean acts as a simple container for data. In the JSF world, the bean mimics a request-scoped ManagedBean.
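A minimal sketch of the Step 1 contract, with illustrative fields and a request parameter standing in for a midtier call; the real factory method signature may differ.

import javax.servlet.http.HttpServletRequest;

// One file per class in practice; shown together for brevity.
class IndexBean {
    private String userName;
    public String getUserName() { return userName; }
    public void setUserName( String userName ) { this.userName = userName; }
}

class IndexBeanFactory {
    public IndexBean create( HttpServletRequest request ) {
        IndexBean bean = new IndexBean();
        // Midtier calls would happen here; a request parameter stands in for one.
        bean.setUserName( request.getParameter( "user" ) );
        return bean;
    }
}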
Step 2 - Serialize the page-specific bean to XML
I found a really great Java library for serialization to XML. The library is called Simple. It is really easy to use. In our example, the render engine takes the IndexBean instance that was created by the factory and serializes it to XML. Now we have an XML document that represents all the data that would be visible on the page.
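A sketch of Step 2 using the Simple library (org.simpleframework.xml); the example output assumes a bean annotated along the lines of the Step 1 sketch.

import java.io.StringWriter;
import org.simpleframework.xml.Serializer;
import org.simpleframework.xml.core.Persister;

public class BeanSerializer {

    // The bean class needs Simple's @Root/@Element annotations on the fields
    // that should appear in the XML.
    public String toXml( Object bean ) throws Exception {
        Serializer serializer = new Persister();
        StringWriter out = new StringWriter();
        serializer.write( bean, out ); // e.g. <indexBean><userName>jeff</userName></indexBean>
        return out.toString();
    }
}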
Step 3 - Transform the Bean XML to HTML and Custom Tags (Intermediate XML)
This is where the "replacement" of the JSP file comes in. In this example, the developer would write index.xsl instead of index.jsp. This XSL file contains a mix of HTML and custom tags. You can use XPath and the <xsl:*> namespace tags to do logic, like loops. As a developer, you are transforming the page bean XML to HTML. It isn't 100% HTML, though. The goal is for the "syntax" of this "visual code file" to be easy to learn for JSP/JSTL/Facelets developers. When you want an HTML tag, you just put an HTML tag. If you have a "tag library", you can use those tags. This requires developers to learn XSLT, but as I have pointed out previously, every developer lists XSLT regardless of their experience with it.
Step 4 - Transform the Intermediate XML to HTML
While the Intermediate XML has HTML tags, it also has Custom tags. The final step in the process is to perform an XSL transformation to translate the Custom tags to HTML tags. This is where your Tag Library developers come in. They maintain their own XSL file that contains all the custom tags. You can even import multiple tag libraries together if you have multiple teams doing this. This step is what allows UI developers to use reusable code in their pages.
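A sketch of Steps 3 and 4 chained together with the JDK's built-in XSLT support (javax.xml.transform); the classpath resource names are made up.

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class RenderPipeline {

    public String render( String beanXml ) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();

        // Step 3: bean XML -> intermediate XML (HTML plus custom tags)
        Transformer page = factory.newTransformer(
                new StreamSource( getClass().getResourceAsStream( "/xsl/index.xsl" ) ) );
        StringWriter intermediate = new StringWriter();
        page.transform( new StreamSource( new StringReader( beanXml ) ),
                new StreamResult( intermediate ) );

        // Step 4: intermediate XML -> final HTML (custom tags expanded by the tag library)
        Transformer taglib = factory.newTransformer(
                new StreamSource( getClass().getResourceAsStream( "/xsl/taglib.xsl" ) ) );
        StringWriter html = new StringWriter();
        taglib.transform( new StreamSource( new StringReader( intermediate.toString() ) ),
                new StreamResult( html ) );

        return html.toString();
    }
}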
In this example, you end up with an HTML file from an XSL file. Although a bit awkward at first, it should be easy to pick up. Developers are only using technology that they should already know! Your tag library developers stay in XSL/HTML land. No more Java code. Now, let's talk about some of the goals again.
Tag Library
This system supports having a tag library. Developers use the custom tags in Step 3 while the tags are implemented in Step 4. My app makes use of JQueryMobile. For every component that I use, I have created a custom tag for it. This makes it fast for me to write pages, since I don't need to copy and paste the HTML for a ListView over and over again. The tag library is even segregated, so if you had a special team that owned this file, they would not be sharing the file with the page developers. These XSL files are read from the classpath, so they can even be bundled with different jar files.
Reading from Java Beans
In the JSP and Facelets world, developers use Expression Language (EL) to access various Java beans. With this technology, we serialize a single bean (tree) into XML. This allows us to use XPath to read from the "Java Bean". Although we are technically reading from XML instead of the bean, it can conceptually be thought of as reading directly from the bean. This is implemented in Step 2.
Flow Control
Flow control that simulates JSTL is implemented in Step 3. Developers use standard XSLT tags like <xsl:for-each> to do flow control. These constructs are not new. Developers should be semi-familiar with them. The learning curve should be pretty minimal.
Testing with Mock support
Every step in the pipeline can be unit tested. The previous steps in the pipeline can be mocked to perform a true unit test. You can even integration test the entire pipeline by mocking out the midtier calls. You can use XmlUnit to perform an XPath assertion. You can use Test-Driven Development on your UI layer! You can unit test your bean factory and XSL page separately. You can unit test without starting Tomcat!
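A sketch of what such a test might look like with JUnit 4 and XmlUnit 1.x; the XML and XPath here are stand-ins for real Step 2 output.

import org.custommonkey.xmlunit.XMLAssert;
import org.junit.Test;

public class IndexPageTest {

    @Test
    public void beanXmlExposesUserName() throws Exception {
        // In a real test this would come from the bean factory plus serializer,
        // with the midtier calls mocked out.
        String beanXml = "<indexBean><userName>jeff</userName></indexBean>";

        XMLAssert.assertXpathEvaluatesTo( "jeff", "/indexBean/userName", beanXml );
    }
}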
Reading from the Classpath
XSL files are read from the classpath, not the filesystem. This means you put them in src/main/resources for Maven projects. In debug mode, you can modify the XSL files and hit refresh in the browser. You can TDD the XSL files using JUnit. You can split out different parts of your website into different jars (separate source-control repositories) so that different teams don't conflict with each other.
Android Support
All of this technology uses either simple 3rd party Java libraries or parts of the JRE. Most of the 3rd party libraries advertise the fact that they work on Android. While I am using a servlet engine, that servlet engine (TJWS) advertises the fact that it works on Android. On top of that, there is nothing in the design that forces it to work with a servlet engine. In my current implementation, the factory object gets an HttpServletRequest object passed in. Switching that out with a generic "request" bean would completely decouple the system from a servlet engine. Alternatively, you might be able to implement the HttpServletRequest interface in a way that wraps the native request object for your webserver. Either way, you now have a UI technology that works on Android.
Ease of Development
Another feature I implemented that makes development easier is segregation of the pipeline. You can request the outputs of Steps 2 through 4 just by changing the URL. In the example above, we requested index.html. This gives you the output of Step 4. If you were to request index.xml, then you get the output of Step 2: the serialized XML of the Java bean. If you were to request index.phtml, then you get the intermediate XML that contains a mix of HTML and custom tags from Step 3. A developer can hit Step 2 in the browser to see what the XML structure looks like so that they can write the XPath expressions correctly. Tag developers can look at Step 3 to see the before and after of their tag transformations.
Internationalization
Step 4 also performs simple variable replacement out of a ResourceBundle. This allows built-in internationalization that follows Java's best practices. Java and Android tend to use different internationalization systems, though.
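A sketch of that replacement step, assuming a hypothetical ${key} placeholder convention and a messages.properties bundle on the classpath.

import java.util.Locale;
import java.util.ResourceBundle;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MessageReplacer {

    private static final Pattern PLACEHOLDER = Pattern.compile( "\\$\\{([^}]+)\\}" );

    public String replace( String html, Locale locale ) {
        ResourceBundle bundle = ResourceBundle.getBundle( "messages", locale );
        Matcher matcher = PLACEHOLDER.matcher( html );
        StringBuffer out = new StringBuffer();
        while ( matcher.find() ) {
            // Look up each ${key} in the bundle and splice in the localized text.
            matcher.appendReplacement( out,
                    Matcher.quoteReplacement( bundle.getString( matcher.group( 1 ) ) ) );
        }
        matcher.appendTail( out );
        return out.toString();
    }
}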
Performance Considerations
One thing to keep in mind is the scalability of your system. One thing that JSP has over this system is performance. JSPs get translated into Java code which gets compiled to class files. The tag libraries are implemented in Java, so they are pre-compiled into class files. The class files are JIT-compiled into (hopefully) optimized assembly. This means a JSP-based system will perform better than this system. On the other hand, these performance implications are the same for Facelets, so if you are using Facelets instead of JSPs, you shouldn't see much of a performance impact due to the XSLT transformation.
Wednesday, December 4, 2013
Ubuntu Tablet PC (Toshiba Satellite u925t)
Ubuntu installed pretty easily on my Toshiba Satellite u925t. I loaded Ubuntu 13.10 onto a USB stick and it installed without any issues. The laptop wasn't as aggressive with Secure Boot as my Acer laptop was. All the hardware that I use worked out of the box. I tested Wifi, the touchscreen, the mouse pad and the camera. I did not test Bluetooth, but I do see the icon in my system tray.
While the touchscreen is pretty accurate, I did run through a bunch of customizations to increase the sizes of toolbars and other things. Having larger handles is just easier. I use Onboard as my on-screen keyboard. I launch the keyboard on startup. I keep it hidden, but I allow it to pop up when it detects text input. I also keep the touchable icon on the bottom left of the screen. I use the "Small" layout, which is the closest to what I want. I wish the hide button were a regular key in that layout, though. I don't always want the keyboard to show up, and I found it annoying to long-press the Enter key to bring up another popup to hide the keyboard. I have not figured out a way to configure Onboard to not show the keyboard when I'm in laptop mode.
I installed Google Chrome, but I found that Mozilla Firefox is much more touch friendly. Google Chrome strives to have the same look and feel across all platforms. This means it ignores the size changes I made to make the toolbars and scrollbars larger. I installed a few Firefox add-ons that allow me to scroll using a drag gesture and to long-click to open in a new tab (there is no way to right-click in tablet mode, and the u925t's touchpad does not right-click very well). As always, Flash blockers are essential to preserve battery life when browsing the internet. I installed Genymotion to allow me to run Android apps. I will dive deeper into Android-on-Linux in a future post, but overall it runs OK.
I configured the power button to put the laptop into suspend mode when pressed. This makes the laptop feel more like a tablet. I increased the Unity Panel size to 54 pixels. I haven't figured out a way to right click while in tablet mode, though.
Overall, I am very happy with this setup. I spend most of my time in tablet mode. I go to laptop mode when I blog or when I program. I believe that this style of operating system is the future.
Sunday, December 1, 2013
Feature Request: Optional Permissions in Android
Imagine yourself as an Android developer for a bank....let's call it JeffBank. Banks usually have two types of customers: people with deposit (checking/savings/CD) accounts and people with loans. Now, JeffBank wants to roll out a mobile app. You are writing screens for two different lines of business. This isn't that abnormal. It happens all the time. When someone logs in, you get one of three different experiences: deposit, loan, or both. You roll out the app and the negative reviews start piling in.
You start reading and you realize people are complaining about permissions. You see angry comments questioning the need for access to the camera. You see accusations of being involved with the NSA because you have access to read the GPS location. One thing I learned from being an Android developer is that Android users do look at the permissions an app uses, and they are very vocal when they feel you are doing something "shady".
This problem stems from the fact that your loan users are getting a deposit account app as well as a loan app. Users tend to think of themselves, and not about the company they are doing business with. While a bank is a little more obvious than other types of split-personality companies, your loan users still don't understand why you need a camera or GPS for making a loan payment. You can try to calm them down, but you don't always know who they are because they never actually installed the app! You can't tell them that if they had a checking account, they could deposit a check by taking a picture of it, or that if they needed an ATM, the app would help them find the closest one. Even if you try, would your loan customers even listen to you?
What Android needs is an optional permission system. When a user tries to install an app, they should see a list of required and optional permissions. For the optional permissions, the user would have the option of revoking them at install time or after the fact. This allows JeffBank to make a mobile app and not piss off the loan customers while still providing features to the deposit customers. Developers would still have the option of required permissions, like internet access. It would be silly for a bank app to flag that as optional.
Android 4.3 introduced a hidden screen that allows you to turn off some permissions for apps, but there is no telling how an app will respond to permissions being turned off. Current developers aren't developing for that. If you turn off a permission and the app breaks, then the developer is just going to tell the user "well, don't do that!" Right now, all permissions are required. By letting developers declare which permissions are optional, as opposed to letting users revoke anything blindly, you give developers a reason to add if-checks that make sure a permission has been granted. If a deposit user has no plans to use the ATM locator feature, they can disable GPS for the app. The mobile app could detect that it doesn't have permission to read the GPS location and hide the menu option, or show it with an error message directing them to give the app access to GPS.
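A sketch of the kind of if-check this would enable, using Android APIs that already existed in 2013; the ATM-locator menu item is hypothetical.

import android.Manifest;
import android.content.Context;
import android.content.pm.PackageManager;
import android.view.MenuItem;

public class PermissionGate {

    public static boolean canUseGps( Context context ) {
        return context.checkCallingOrSelfPermission( Manifest.permission.ACCESS_FINE_LOCATION )
                == PackageManager.PERMISSION_GRANTED;
    }

    // Hide (or explain) the ATM locator when the optional permission was revoked.
    public static void configureAtmLocator( Context context, MenuItem atmLocatorItem ) {
        atmLocatorItem.setVisible( canUseGps( context ) );
    }
}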
Overall, this feature would create more trust between mobile users and mobile developers. It would cut down on hate-reviews asking why JeffBank is trying to take naked pictures of them.
Monday, November 25, 2013
Toshiba Satellite u925t
My Acer laptop died (after only one year!), so I was in the market for a new laptop. I also decided to keep an eye out for a replacement for my wife's Toshiba netbook. Her netbook had an unfortunate disagreement with the floor. The device ran fine.....the screen was just cracked. I was originally going to wait for Black Friday/Cyber Monday deals to come out, but Woot had a one-day sale on a refurbished Toshiba Satellite u925t. I did some googling, watched some YouTube videos, and decided it would work for both of us, so I bought two.
I won't go into specs, since you can get those from any website or review. Here are a few things to consider that the specs don't always get into. The screen is on a unique track design that won't allow it to close like a standard clamshell. Instead of closing it, you lay it all the way back. Once it is all the way back, it slides over the keyboard. This unique design means the keyboard is completely hidden when in tablet mode (which I like a lot) but also means the screen is always exposed (which is bad when putting it in a backpack). The screen is Gorilla Glass, but you will still want to buy some sort of mini-laptop case before putting it into an actual laptop bag.
I don't benchmark my hardware, so I don't have any raw numbers, but the performance seemed fine. I tend to run hardware a lot harder than most non-IT people, but I also run Linux so the performance is always better. Subjectively, the performance seems fine for what I am doing. My wife kept the Windows 8 install while I immediately installed Ubuntu. In a future post I will dive into detail on how Ubuntu works on a Tablet PC.
What surprised me the most was how much both my wife and I leave the laptop in tablet mode. The on-screen keyboards for both operating systems work really well. The only "every day" tasks that I use laptop mode for are programming and blogging. Everything else I do using the on-screen keyboard. Since my emails tend to be a lot shorter, I will use the on-screen keyboard for email. Most websites work perfectly fine in tablet mode. Although my wife was not happy about being forced to use Windows 8, she has come around. Windows 8 on a Tablet PC is far better than Windows 8 on a laptop.
The touchpad is actually pretty annoying. To save space, almost the entire surface of the touchpad can be used to move the mouse, including the space on top of the buttons. This means you regularly move the mouse while trying to click on something, which gets annoying really fast. If you look/feel around the touchpad, you will see/feel a horizontal line across the very bottom of the pad. That line is the bottom of the "touch area" of the track pad. If you use the tip of your finger below that line, you can left-click without moving the mouse. Another thing that is annoying about this design is you can't keep your finger on your left mouse button. I have a habit of keeping my finger on the bottom while moving my mouse. If you do that, you will find that the mouse doesn't move at all: the touchpad disables mouse movement when you have two fingers on it. Right click is also frustrating. When you try to right click after positioning the mouse, you will often get a left click instead of a right click. This is because the mouse moved slightly, causing a "tap" to register instead of the click, and a tap is a left click. Since the tap happened first, you get a left click instead of a right click. I have gotten a little more used to left and right clicking properly on this touchpad, but it isn't that much of an issue, since I use tablet mode over 90% of the time.
We have had the laptops for less than a month, but so far, we are very happy with them. I have played around a lot with the touch interface and will be posting more details about that.
Thursday, November 21, 2013
Uninstall as Administrator....Seriously Microsoft?
I installed some software on my wife's Windows 7 netbook. The software didn't work out so I decided to uninstall it. That is when I had to fight the uphill battle of uninstalling the software. I got an error about not being an Administrator. This was weird since we never set up multiple accounts on my wife's netbook. There was only one user. Eventually I found an answer on Microsoft's site. Apparently you have to open up a command window and enable the administrator account. Once you do that, you can log into the administrator account to uninstall the program. After you do that, you can re-disable the administrator account. Seriously?
Wednesday, November 13, 2013
Technology can't win against Big Media
When Intel announced that it was putting together an Internet TV service, I was really excited. I really want Internet TV to kick off. I believe it is a new model of media that will lead to a revolution in how we look at content. It looks like that revolution won't happen any time soon.
I was looking forward to Intel's offering because Intel is a big name company. Thanks to the Blue Man Group commercials, even grandmothers know who Intel is. I was sure that big media couldn't ignore such a recognizable brand. I was wrong. It appears that this technology giant is failing to deliver, and not because of a technology limitation, bad programming, or bad project management. The entire technology part is off the hook. It is failing because the current media distribution companies (your cable TV providers) have been actively trying to prevent Intel from acquiring media for its service.
You see, companies like Time Warner Cable make lots of money on crap. By crap, I mean 200 channels with nothing but cats. Actually, it is worse than that: the internet has proven that people love cats! With cable TV, you get hundreds of unorganized channels with no ability to watch on your own schedule. This is exactly what they want, since their money is made on a per-channel basis. So, if you only want 20 channels but you are forced to get 200, you end up paying for 200 channels. Profit!
The whole point of Internet TV is to reduce the channel concept and help you find things that you would actually enjoy. If you like MythBusters but you don't like Ghost Hunters (you know, since you like real science), then the Internet TV service recommends shows and/or episodes that match your tastes. You might find a new show that is good but isn't paying a whole lot for advertising (imagine that!). You might find an episode of a show that you haven't liked in the past, but you really enjoy that episode, so it causes you to give the show a second look (my wife hated Bones the first time she saw it; now she loves it...pre jumping the shark).
Imagine a system that promotes shows without forcing them to pay large amounts of money for advertising. Imagine content producers having the ability to reach wide audiences without a lot of startup capital. Imagine a system where you sit down on the couch, and all the content that you want to watch is sitting in a queue for you to watch, finish, then move on to non-TV related activities.
Youtube and Netflix have a lot of these qualities and features. I was hoping Intel's service would be a superset of Youtube and Netflix features, plus the only thing that is really required to take on the content distributors...content. It looks like the big media empire will continue to live on, though.
Sunday, November 10, 2013
JSP Limitations
I have previously posted about the limitation of JSPs where they can't be read out of the classpath. In that scenario, I had a library that wanted to have a simple user interface. I now have a new scenario where I am using a tiny servlet container. I am writing a server application that has a user interface. This server application is small enough that it technically can run on an Android device. The catch is that JSPs won't run on Android: the JSP spec says JSP files are translated into Java files, then compiled into class files, and Android can't run class files, so it can't support the JSP spec.
On top of that, you still can't easily unit/integration test webapps that are implemented in JSPs. Although some people have been able to run some sort of automated test case (using tools like Selenium), they usually start up a Tomcat instance as part of the build process. Now you have to worry about configuring Tomcat as part of your build. You can also kiss mocking goodbye: now you are connecting to a real database.
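For contrast, here is a rough sketch of why a plain servlet is easier to test: the request and response can be mocked and the test runs without starting Tomcat or touching a real database. I'm assuming Mockito and JUnit on the test classpath, and the servlet below is a made-up example rather than code from an actual project.

// Sketch: unit testing a servlet with mocked request/response, no container.
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.*;

import java.io.PrintWriter;
import java.io.StringWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.junit.Test;

public class GreetingServletTest {
    // Hypothetical servlet under test.
    static class GreetingServlet extends HttpServlet {
        @Override
        public void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
            resp.getWriter().write("Hello " + req.getParameter("name"));
        }
    }

    @Test
    public void rendersGreetingWithoutAContainer() throws Exception {
        HttpServletRequest request = mock(HttpServletRequest.class);
        HttpServletResponse response = mock(HttpServletResponse.class);
        StringWriter body = new StringWriter();

        when(request.getParameter("name")).thenReturn("world");
        when(response.getWriter()).thenReturn(new PrintWriter(body));

        new GreetingServlet().doGet(request, response);

        assertTrue(body.toString().contains("Hello world"));
    }
}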
These types of limitations can be pretty damaging in this day and age. The combination of HTML, custom JSP tags and JSTL inside of a JSP is really helpful when it comes to developing an HTML webpage. It just isn't as portable and testable as it used to be. The definition of "portable" has expanded and developers are doing more automated testing.
I have tried a few times to invent technology that tries to solve problems like these. I have never come up with anything that does it in a very good way, though. Maybe the next generation of UI technology will solve these problems.
Thursday, November 7, 2013
Minix Neo x7 didn't live up to the hype
In a previous post I talked about possibly buying the Minix Neo x7 as my next Android TV. This turned out to be a disaster. While some apps that I use on a regular basis did work (very well), others did not. I will go through some of the apps that I tried on the x7. I tried the stock ROM as well as various Finless ROMs.
MX Player Pro - Works
MX Player Pro ran fine on the x7. I even installed a true 1080p kernel and it ran fine. This app is really important to me since my wife and I use it to watch our video collection.
Youtube - Works
The Youtube app worked fine. The interface ran noticeably faster on the x7 than on my MK802 III. Double the cores really did help here. There was less lag when moving between pages.
Netflix - Works
The Netflix app worked fine.
Skype - Does not work
For some weird reason, I was not able to log into Skype. The app launched, but I couldn't log in. Since I couldn't log in, I wasn't able to test if the camera I have actually worked.
Hulu Plus - Does not work
I tried various versions of the Hulu Plus app and they all ran into the same problem. As soon as you tried to play video, the device crashed. The key word here is the device, not the app. My TV went blue, saying there was no signal coming from the HDMI cable. I had to unplug the device to force a restart to recover. This behavior was not exclusive to Hulu Plus.
CBS - Does not work
One of the driving factors behind upgrading to the x7 was the CBS app. While I knew the app wasn't officially supported, I was pretty sure it would run on most modern Android TVs. I was also running out of "app space" on my MK802 III. The upgrade to the x7 was supposed to give me more breathing room since more and more media apps were coming out.
Unfortunately the CBS app exhibited almost the same behavior as the Hulu Plus app, with one minor difference. While the Hulu Plus app crashed right away, the CBS app would crash after the first set of commercials played. I wonder if the crash is related to DRM.
ABC - Does not work
The CBS and ABC apps were released around the same time. While it would be nice to have the ABC app, it wasn't as important as the CBS app. Everything we watch on ABC is already available on Hulu Plus. The ABC app crashed in a similar way as the CBS app. The app would play the commercials, but the actual video would crash.
Overall, the hardware is much better. Quad cores really make a difference. The external antenna is nice as well. The unit also stayed a lot cooler than my MK802 III. I'm disappointed mostly because the x7 is a really capable machine. Maybe a future ROM will fix the issues.
Tuesday, November 5, 2013
IBM Kicks Twitter in the Patents (Ouch)
In another example of patent absurdity, IBM is suing Twitter for patent infringement right before Twitter's IPO. I'm a technical person, so I like to read the details of the patents whenever a software patent suit is filed. I read some details about the first patent, 6,957,224. That is when I noticed something really interesting: I just invented that!
Now I'm not saying I'm the original inventor. Far from it. What I am saying is the patent isn't anything revolutionary. At the highest level, it is just applying a lookup table to url shortening. It has a bit about "proxying", which I did as well. Here is how I infringed on this patent:
I am writing a new video on demand system. This is the 3rd server rewrite that I'm doing. Right before I started to write this version, I stumbled upon thetvdb.com. TVDB provides an API that allows me to get episode descriptions and screenshots. I also get banners for shows and seasons. I decided that my new VOD system would display these banners and screenshots, but I didn't want to consume too much of TVDB's bandwidth. I decided to cache the images.
I came up with a url shortening scheme. I created a simple lookup table. One column is the url of the image I want to cache. The second column is the MD5 hash of the url, and that column has an index on it. I chose MD5 because it is fast and I didn't need it to be cryptographically secure. When the VOD frontend calls my server, the server gives back a shortened url that looks like /cache/${MD5_HASH}.jpg. I wrote a servlet that maps to /cache/*. That servlet takes the MD5 hash and checks if a file with that name exists in the cache directory. If it does not exist, the servlet performs a lookup in the lookup table to figure out which url maps to that MD5 hash, then downloads the url into the cache directory. Now that the cache file exists, it sends back the file (or a 304 to let browser caches work). The VOD frontend can now "download" the image over and over again without impacting the availability of TVDB.
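For what it's worth, here is a rough sketch of what that caching servlet can look like. The cache directory path, the in-memory map standing in for the database lookup table, and the simplified 304 handling are assumptions for illustration, not my actual code.

// Sketch: servlet mapped to /cache/* that resolves md5 -> original url,
// downloads on a cache miss, and honors If-Modified-Since with a 304.
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ImageCacheServlet extends HttpServlet {
    private final File cacheDir = new File("cache");
    // Stand-in for the lookup table: md5(url) -> original url.
    // The real version would do: SELECT url FROM url_cache WHERE url_md5 = ?
    private final Map<String, String> urlByHash = new ConcurrentHashMap<String, String>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Path info looks like /<md5>.jpg
        String name = req.getPathInfo().substring(1);
        String hash = name.substring(0, name.lastIndexOf('.'));

        File cached = new File(cacheDir, name);
        if (!cached.exists()) {
            // Cache miss: resolve the hash back to the original url and download it once.
            download(urlByHash.get(hash), cached);
        }

        // Honor browser caches: reply 304 when the client copy is still current.
        long lastModified = (cached.lastModified() / 1000L) * 1000L;
        if (req.getDateHeader("If-Modified-Since") >= lastModified) {
            resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            return;
        }
        resp.setContentType("image/jpeg");
        resp.setDateHeader("Last-Modified", lastModified);
        copy(new FileInputStream(cached), resp.getOutputStream());
    }

    private void download(String url, File target) throws IOException {
        OutputStream out = new FileOutputStream(target);
        try {
            copy(new URL(url).openStream(), out);
        } finally {
            out.close();
        }
    }

    private void copy(InputStream in, OutputStream out) throws IOException {
        try {
            byte[] buffer = new byte[8192];
            for (int n = in.read(buffer); n != -1; n = in.read(buffer)) {
                out.write(buffer, 0, n);
            }
        } finally {
            in.close();
        }
    }
}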
This is not a unique problem that I solved, and this is not a unique solution. That is why Twitter implemented it: it is a good way of handling shortened urls. Using a lookup table is an obvious step. Adding an index to the lookup table is also obvious. Putting a proxy service in front is once again an obvious step. All of these elements make the system easier to use. It is just good software design. It is a shame that using all of these together to make urls shorter constitutes patent infringement.
Thursday, October 31, 2013
Apple should be spelled with all dollar signs
I use Linux. I have used Linux for many years. I have gotten used to being treated like a third class citizen by software companies. That is why when I write software, I design the software so that it either runs on most platforms or that it can easily be ported to most platforms. This includes Mac. To me, Mac people are considered second class citizens. I empathize. That is why I am so irritated at Apple's methods to force people to pay money.
First, here is a little background. Mac has a file with an extension of .app. This file contains an entire application, which makes it really easy to install and uninstall applications: you drop a .app file somewhere to install it. The problem is that this method of installing software is REALLY insecure. Unbelievably insecure. Mac likes to pride itself on the illusion of security, so Apple decided to "fix" this problem.
The fix for the problem is actually pretty easy. The .app file supports a digital signature. This means not only can you verify who made the .app file, you can verify that Apple "trusts" the person who made the .app file. This type of thing is fairly standard in the industry. The problem is who does Apple "trust"?
It turns out Apple only trusts you if you pay them $99/year. If you are developing an app for their store, then $99 might not be too bad. My problem is I am writing FREE software that I planned on giving to my aunt and a few friends that run Mac. For my friends and family to use the software that I wrote, I have to pay Apple $99 a year.
This is just ridiculous. I'm not going to pay $99 a year to give my family software to run. Apple's behavior is very anti-open source. Their goal is to force people to pay Apple in any way possible. Which brings me to the next pain......being forced to buy their hardware!
Wednesday, October 16, 2013
New Application: TraySync
A few months ago, I set up a shared folder on my Dropbox to share pictures of my son with my family. Dropbox is well known to the non-techie world and is very easy to install. The setup has worked phenomenally well for months. Then we started taking video. HD Video. Large HD Video. I filled up everyone's Dropbox very fast. For those of you who don't know, files in shared folders on Dropbox count towards everyone's quota.
One of the other goals with Dropbox was to allow family members to upload pictures that they took. During any family event, everyone brings their own camera and everyone wants to take pictures. I think this is driven by history: when you had to pay to develop film, you wanted to make sure you got your own "copy" of the pictures being taken. Now, everyone has digital cameras. By having an easy way for people to share pictures, two things can occur (in theory): 1) people have easier access to everyone else's pictures and 2) fewer people want to take pictures because of #1. During the first month, this was true. People were uploading pictures pretty regularly. After that, pictures were not uploaded as often.
I have blogged multiple times about Retroshare but it was just not user friendly enough for the grandparents to use. With Dropbox, pictures and videos just magically showed up in a folder somewhere. I wanted to replicate that same experience. Due to the minimal amount of uploads, I decided to drop that requirement. Besides, the Dropbox folder still exists, so people could still upload there.
I decided to write a program that sits in your system tray, polling for changes on an HTTP server. It downloads a text file once an hour. The text file consists of a bunch of HTTP urls, one for each picture or video. The software then downloads each item in the text file. If-Modified-Since headers are used to limit the bandwidth that is used: if the HTTP GET for the text file comes back with a 304, the software doesn't even bother firing off requests for the files in the text file. All downloads are password protected.
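Roughly, the polling loop looks like the sketch below: fetch the index file with an If-Modified-Since header and skip the whole round when the server answers 304. The class name, the one-url-per-line format and the omission of authentication and per-file downloads are simplifications for illustration, not the real TraySync code.

// Sketch: hourly poll of the index file, honoring 304 Not Modified.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class RepositoryPoller {
    private long lastModified = 0L; // remembered between polls

    // Returns the urls listed in the index, or an empty list when nothing changed.
    public List<String> poll(String indexUrl) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(indexUrl).openConnection();
        conn.setIfModifiedSince(lastModified);

        List<String> urls = new ArrayList<String>();
        if (conn.getResponseCode() == HttpURLConnection.HTTP_NOT_MODIFIED) {
            return urls; // 304: don't even bother requesting the individual files
        }
        lastModified = conn.getLastModified();

        BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.trim().length() > 0) {
                    urls.add(line.trim()); // one picture/video url per line
                }
            }
        } finally {
            reader.close();
        }
        return urls;
    }
}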
The tray icon changes if it is downloading something. After a round of files is downloaded, a notification window pops up, telling you how many files were downloaded. The app has a configuration window that allows you to add multiple "repositories" and displays a table of the most recent downloads.
The software is written in Java 1.6. For Windows, I used the Nullsoft Scriptable Install System (NSIS) to create an install exe file. The installer adds a shortcut in the "Startup" folder so that the app starts up at boot. I even made an unattended install that sets up the initial repository and starts the app for the grandparents.
I have not decided if I am going to put the software up on Sourceforge or Github, but it will be open source. As of now, there will not be a Mac release. I will put a rant about that in another post.
Tuesday, October 15, 2013
Missing In Action
Sorry I have been missing lately. I have had very little time for tech activities, and the blog was temporarily lower in priority. I have been spending my tech time writing some software. I will be releasing the software soon, and when I do, I will write a post about it. I will also post a rant about Apple related to this software. I have some new hardware (a Minix Neo X7) that didn't pan out that I will blog about as well.
Thursday, September 19, 2013
Fear of Breaking Production Code
At every single place I have worked, there has always been this fear of breaking code that already works. We are never allowed to update code without a "business reason". This is really frustrating for me. I like creating common code that allows reuse. The reusable code often brings down the cost of implementing new requirements, but that is very hard to quantify. Therefore, I'm stuck with old code that I am forced to "maintain" but I'm not allowed to "update".
Most recently, I was working on a page that had a few legal disclaimers on the bottom. After the disclaimers was a link to a page that had more verbose disclaimers. When I asked about the details of the disclaimer page, I was told that it already existed and that I should just use the existing page. I dove into the code and found that the disclaimer page was NOT implemented in a way that promoted code reuse. When someone else needed that disclaimer page, they resorted to copy-and-paste programming! This left me in a bad situation. My business would not accept the idea that I couldn't just reuse the existing disclaimer page. In their view, someone else was able to reuse it, therefore I must be lying. The cheap thing to do would be to just copy-and-paste all over again. I don't like doing that. Alternatively, I could re-implement the disclaimer page in a way that is reusable. All developers could call a new library that I write that displays the disclaimer page for them. This makes it cheaper to implement the requirement in the future while not costing a lot of money: the reusable logic actually isn't that complicated. You just have to implement it with reusability in mind the first time around. I chose to go with this option, but I now have two different implementations of the same page! I'm not allowed to retrofit the new implementation, since that could "break working code". There is no guarantee that the next time this requirement comes up, that developer will use my reusable version. In fact, the business will probably tell them not to talk to me since I gave them a hard time in the first place.
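To give a concrete (and heavily simplified) idea of what I mean by a reusable version, the sketch below exposes the disclaimer page through a single static call that any webapp can use instead of copy-and-pasting markup. The class name and JSP path are hypothetical.

// Sketch: one shared entry point for rendering the disclaimer page.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public final class DisclaimerPage {
    private static final String DISCLAIMER_VIEW = "/WEB-INF/jsp/disclaimer.jsp";

    private DisclaimerPage() {}

    // Any page can forward to the shared disclaimer view instead of
    // duplicating the markup in its own JSP.
    public static void render(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        request.getRequestDispatcher(DISCLAIMER_VIEW).forward(request, response);
    }
}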
This was just the latest example. I have run into this multiple times. It doesn't matter that you can improve the code. Changing the code means risk. Everyone is afraid of that risk; especially if they don't understand that there is a cost benefit. To me, there should have been test cases and automated tests that mitigate those risks. What irritates me the most is the people who tend to be really against software maintenance are also the ones that tell me automated unit tests are a waste of time!
Software should have automated test cases. Test cases and requirements should be clearly documented (I am still bad at this one). Code coverage should be reasonably high (I let developers define "reasonable"). There should be a clear, repeatable elevation plan that supports rollback. If you have all of these things, the risk should be so small, that you should be able to perform software maintenance tasks while minimizing the risk.
Wednesday, September 18, 2013
The need for "rock star programmers"
A few days ago, there was a post on Slashdot questioning the need for "rock star programmers". I took some issue with that post. I'm not going to question the definition of a rock star programmer. I'm not going to claim to be a rock star programmer. My issue with the post is when a company should use a rock star programmer. The author of the post classifies problems on a scale from 1 to 10 based on complexity/difficulty. He claims rock star programmers are only needed when solving a 10. Therefore, most companies don't need rock star programmers because they usually don't have problems above a 6 or a 7. I disagree with that assertion. I don't look at good programmers as people who tackle difficult problems. To me, a good programmer provides a good solution to the problem, regardless of the difficulty/complexity. If you rate solutions from 1 to 10, then a rock star programmer can create a 10 solution to a 1 problem.
I think the quality of the solution is more important than the mere fact that a person has found a solution. I have seen bad solutions wherever I go. If your company has a bunch of 6 problems, but you get a bunch of 1 solutions, then your infrastructure is just plain bad.
Hire rock star programmers. Hire good programmers. Get good solutions. Have a good infrastructure. Don't just solve the problem as cheaply (poorly) as you can. A rock star programmer can crank out a 10 solution in the time it takes a weak programmer to create a 1 solution.
Monday, September 16, 2013
64bit iPhone: Innovation or Buzz
Over ten years ago, I was in a Gamestop with a buddy of mine. We were in the PC gaming section (you know, when Gamestop still had a PC section) talking about some of the new PC games coming out. Someone we didn't know came over wanting to talk to us. He then changed the subject asking what we thought about 64bit CPUs and whether they would take off. My buddy (who reads this blog so he might comment) decided to troll him and say "Sun has been using 64bit for years". The guy had this weird confused look on his face. I laughed but felt bad for the guy.
Just like servers, then desktops after them, all phones will eventually run a 64bit CPU. The question is when. Apple has decided the time is now, but is that just for show, or do they have something in store for us? ARMv8 is still really new and was designed to help ARM break into the server market, so is this the right move for Apple?
First of all, there is a performance benefit to using 64bit. The benefits can be lumped into two categories: math and memory throughput. Let's break math down into two more sub-categories. First, there is 64bit arithmetic. Very few programs need to do 64bit arithmetic. 64bit arithmetic is so slow on 32bit CPUs that programs generally don't do it unless they absolutely have to. These programs can expect a performance boost by running on a 64bit CPU. 3D games generally fall into this category. 2D games do not, however.
The second sub-category is when you are running the same arithmetic over a set of data. 64bit arithmetic would in theory allow you to double the throughput of 32bit arithmetic by treating two 32bit add operations as a single 64bit add operation. However, 32bit ARM processors generally already support 64bit and 128bit SIMD operations, so you generally won't see much of an improvement there.
The other performance benefit is in memory operations. When using registers to copy memory, you generally copy one "word" at a time. In a 32bit CPU, this means you copy 4 bytes of data for every copy operation. When moving semi-large amounts of data, like 4KB, you are using a lot of cycles: a 4KB copy takes 1,024 word-sized moves with 32bit registers but only 512 with 64bit registers. You can cut the copy time roughly in half when your word size doubles to 64bits. The most common use case for this is image copy operations, sometimes referred to as a blit. The general interface could see a speedup because of this.
Like most performance gains, there is a memory trade off. Pointers are generally represented using a word size. So, doubling the word size to 64bits means doubling the amount of memory a process uses for pointers. This can amount to a much larger memory footprint for an application. This is really bad for low memory devices like mobile phones.
This issue segues us into the end of the real benefits of 64bit and into the marketing hype. If you listen to the hype, then having a larger memory footprint might not be a big deal since 64bit word sizes allow you to address more memory! Although that is true, it is also pretty irrelevant in this case. Although it is true that 32bit word sizes limit the addressable memory, they limit it to 4GB. That is 4 times the RAM size of the iPhone 5. Obviously being 32bit is NOT a limiting factor. Therefore, the extra addressable memory advantage is hype and doesn't help with the very real problem of your apps now having a larger memory footprint. In fact, Unix can play games with addressable memory to make the whole upgrade to 64bit unnecessary (for now).
Here is where the true power of upgrading to 64bit sits. I'm actually disappointed in Apple's marketing because I feel like they missed an opportunity here. I guess when your customer base always buys the latest version, marketing the longevity of a device is not as important as it obviously is to the Apple engineers. So far, Apple hasn't said a lot about the possibility of releasing apps that work in both iOS and OSX. The reason for this is easily explainable: the software technology isn't ready yet. But here is the thing, software can be upgraded after the fact. Just because the hardware team got there first doesn't mean you should postpone the whole project. Apple's engineers are positioning themselves in a way that will revolutionize mobile devices. I have talked about this idea before. The merging of desktop/laptop/tablet and phone can be very big. Although Apple isn't tackling it the way I would have handled it, at least they are tackling it.
In the end, 64bit is mostly hype for now, but by releasing 64bit now, Apple ensures that when they start making some major changes in how we view mobile devices, the iPhone 5s won't be left in the dust.
Tuesday, September 10, 2013
Limiting Facebook the way we all should
My phone has a pre-Facebook Home version of the Facebook app. For those of you who don't know, Facebook Home is an alternative launcher for Android that turns your entire experience into a Facebook experience. There is another detail that a lot of people don't know: a lot of the logic to implement Home is not in the Home app, it is in the main app. This means the main app now requires access to make phone calls and get a list of running apps. This is way too much for Facebook to have, so I never upgraded the main app.
Recently, I have been getting messages in my feed in the app telling me that my version of the app might stop functioning soon. As a (new) mobile developer, I can appreciate the need to limit the number of versions a company has to support. I still don't want to upgrade because the last thing I need is a Facebook worm that forces my phone to dial 1-900 numbers.
I took to Facebook to complain about this problem, knowing that Facebook's NSA-style monitoring might kick in. I expressed my desire to have some sort of Access Control List to disable permissions for any app that I have installed. Out of all the features of 4.3, the app ACLs were the feature I was most looking forward to. When my Nexus 7 got the 4.3 upgrade, I looked, but couldn't find the feature. That is when one of my coworkers pointed out that the ACL (App Ops) is hidden by default. She told me I could go to the market to get an app that allows me to launch the hidden App Ops activity. Once there, I could disable some of the permissions that I don't want Facebook to have (Facebook can no longer read my contacts!)
Now, the only problem is my phone (AT&T Samsung Galaxy SII Skyrocket - SGH-I727) only supports up to 4.1.2. I might have to wait a long time (if ever) before getting 4.3. Cyanogenmod has nightlies of 10.2, which is based on Android 4.3. I might have to root my phone and install CM.
Monday, September 9, 2013
Foscam FI8918W Wifi not working: Try 2
In a previous post, I talked about the problems I had with a Foscam FI8919W. The wifi was cutting out, so I had to resort to using an ethernet cable. Once I started using the ethernet cable, everything was working fine. I didn't know if I had a dud or if it is a problem with the line of cameras. Because of that, when I had a need for another camera, I decided to purchase another one. Unfortunately, that camera had similar issues.
I bought the new camera to sit inside my living room, pointing out the front window. Although I could run ethernet, I really wanted to use the wifi. The first camera didn't quite have line of sight: the wifi signal had to travel through the floor to get to the router, but the router was directly under the camera. For this second camera, I have a secondary access point in the living room. The access point is literally 10 feet away from the camera. The second camera does last longer than the first camera. The first camera would cut out after 5 minutes; the second camera at least lasts a few hours. I don't use the camera all the time, but if every time I want to use the camera I have to physically unplug and replug the power, then it becomes kind of useless to me. On the plus side, since the baby's room already has ethernet, my wife is very excited about possibly having a second camera pointing at the crib.
At this point, I can't recommend Foscam to anyone. I did try to give the brand a second chance, but it just didn't work out.
Thursday, September 5, 2013
Considering the Minix Neo x7 over the MK802 IV
In a previous post, I talked about how the MK802 IV had the potential to go mainstream. I believe that the device has failed in that goal. Many reviews talk about problems with Netflix and other apps. Although I am currently a strong supporter of MK802s, that support is wavering. I'm considering a Minix Neo x7 for my next purchase. These devices are far larger than MK802s, so they are not as portable, but they do offer a lot of features. They have built-in ethernet, an external antenna, more USB ports and a remote control. The x7 is the closest thing to a perfect device so far, but it is missing one major feature. Right now, the firmware only has a 720p kernel. It will still output at 1080p, but it will be upscaled from 720p. For me, that is not a big deal; my bedroom TV is only 720p, though I would really like to keep 1080p in my living room. The goal is to get this technology mainstream, however, and the mainstream really loves its 1080p. It really is an essential feature. I understand that there are heating issues; most Android mini-PCs don't have active cooling. The Neo x7 has a large enough case that I am surprised they don't use a really large heatsink to help with the cooling. They might even be able to squeeze in a small fan if needed. I still think the x7 is the closest bet on Android TVs going mainstream.
Tuesday, September 3, 2013
USB IO Errors and RAID vs ZFS
About 8 years ago, I created a RAID-5 with 4 hard disks. Two of the disks were IDE and two were USB external disks. Each disk was 250GB, giving me 750GB. Everything seemed to work, so I moved a lot of my data onto the RAID. At some point, the RAID disappeared. I looked at the logs and the two USB external disks had been flagged as bad. Because you can't "lose" 2 disks in a RAID-5, I lost ALL of the data. Here is the thing: the disks weren't bad! That event is when I learned that USB hard disks return occasional random IO errors. These IO errors are very small and recoverable, but the RAID software interpreted them as bad sectors. Once one of the sectors was flagged as bad, the entire hard disk was marked bad because it no longer presented the same capacity as the other disks. It turns out backups are important.
After that incident, I played around with EVMS to create a pool out of the disks. The volume management allowed me to see some interesting effects. Whenever an IO error occurred with one of the USB disks, the available pool size shrank. Since the volume was smaller than the pool size, however, I didn't lose any data like I did with a raw RAID.
I am now playing with ZFS. I have a RAID-Z with SATA hard disks. I started getting IO errors on one of my hard disks. These are real errors related to my brand new 3TB WD drive going bad. I had a refurb 2TB Seagate but I had to plug it in via USB. I remembered my problems with USB and RAID-5, but this was only going to be one disk, not two. I plugged in the disk and replaced the files. Like 8 years ago, everything seemed to work fine. After about a day, the first IO error occurred. ZFS handled it just fine. The zpool status command tells you how many IO errors have occurred on the various disks. Because of the CRC capabilities, ZFS is able to handle the problem without marking the entire disk as bad. ZFS gives you the option to either ignore the errors or replace the bad disk. That is what makes a good solution!
Monday, September 2, 2013
When it rains, it pours (dead harddisks)
A few weeks ago was the end of a heat wave. During the heat wave a hard disk died. I RMA'ed that disk, but a week later, another hard disk died. So far, I have not lost much data. I manually stripe files across hard disks. When the first disk died, I re-striped. When the second died, I did lose some data. It got me a little worried, so I decided to create a ZFS pool with blocks on multiple disks. I started copying all of my data to the pool. Although that experience will be written up in a different blog post, I wanted to bring it up a little bit. ZFS has a WONDERFUL feature that allows it to CRC the entire pool to identify any problems. I ran the CRC a few days ago and discovered that a 3rd hard disk is dying! This one is a brand new disk that I purchased. The only data on it is the ZFS pool blocks, so I can replace it with a new disk, but this is getting kind of ridiculous.
Friday, August 30, 2013
Forced Static Analysis and charAt
I find forced static analysis tools annoying. Don't get me wrong. When I use static analysis tools, I try as much as possible to adhere to the rules (even when I don't agree with some of them). What irritates me is when the "quick fix" for a rule is not "functionally identical". Two lines of code are "functionally identical" when you can interchange them without ANY negative consequences. Static analysis tools like JTest and Sonar sometimes provide a "functionally identical" replacement for a violation. Sometimes, though, the quick fix isn't functionally identical and can cause bugs. I find this funny and frustrating at the same time, since the supposed point of static analysis tools is to automatically find bugs.
The latest one that has annoyed me is the rule that you aren't supposed to call String.startsWith() with a string that has a length of one. This violation is a performance improvement. There is an alternative call that is NOT functionally identical but that in theory performs a lot faster. The quick fix will replace the startsWith("A") call with a charAt(0)=='A' call. Here is the theory behind this: startsWith() handles any length, so it contains a loop. charAt() has a much simpler implementation. It just returns the char at the offset + index from the internal character array. This is not only much faster in and of itself, it can also be inlined. This can be pretty important if you are parsing through a large log file searching for a log line that starts with a given character. It works fine... until you have a log line that is EMPTY! That is right; the quick fix for this violation actually causes a bug!
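Here is a minimal sketch of the difference (the class and variable names are mine, just for illustration): the original startsWith() call quietly returns false on an empty line, while the "quick fix" throws an exception.

public class StartsWithQuickFix {
    public static void main(String[] args) {
        String line = "";  // an empty log line

        System.out.println(line.startsWith("A"));   // prints false, no exception

        // The suggested quick fix would do this instead, which throws
        // StringIndexOutOfBoundsException when the line is empty:
        // System.out.println(line.charAt(0) == 'A');

        // A replacement that really is functionally identical has to guard the length:
        boolean startsWithA = !line.isEmpty() && line.charAt(0) == 'A';
        System.out.println(startsWithA);
    }
}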
This is where the danger lies with static analysis tools. It is not about the tools themselves. The problem is that non-technical people get overzealous with the rules. A rule may sound good in theory, but it might not be in practice. It gets worse when you "force" the tools by doing something stupid like failing a build because of a violation. In those scenarios, you end up encouraging developers to use the "Quick Fix" features of the static analysis tools. You end up promoting bugs!
Thursday, August 29, 2013
ZFS and dynamic sized volumes
ZFS has an interesting way of handling volume sizes that took me a bit to figure out. In a normal volume management system, you create a virtual block device (volume) of a fixed size. You can create multiple volumes, but the total size of all the volumes cannot exceed the total pool size. In ZFS, things are a bit different. Since volume management is built into ZFS, a volume can have a variable size. For example, let's create an imaginary 2GB pool in both ZFS and LVM. Now, create a volume in each called "photos". In LVM, you have to give it a fixed size of 1GB, then format it as EXT3. Now you add 250MB of pictures onto the volume. Next, you want to create a volume for "videos". In the LVM world, you can only create a 1GB volume. If you want more space, you have to grow your pool first. In the ZFS world, the "photos" volume wasn't created with a fixed size (unless you accidentally create it in legacy mode like I did). The df command will report that the "photos" volume started out with 2GB free. Once you add the 250MB of photos, it will be reported as having 1.75GB free. Once you create the "videos" volume, df will report it as having 1.75GB free as well. This means you have the full 2GB of disk space available for files. There is no dead space at the end of volumes!
Wednesday, August 28, 2013
Thoughts on Java 8
A buddy of mine asked about my opinion of Java 8, now that the spec has been finalized. I can't help but feel that Oracle dropped the ball pretty hard on this release. If you look at the list of new features in Java 7 and compare that to Java 8, you will notice a distinct lack of new features in Java 8. Here are a few that I wanted to comment on:
JSR 310: Date and Time API
I understand some people's frustrations with Java's current Date and Time API. This is a welcome change, but I personally don't mind the current API.
JSR 308: Annotations on Java Types
I have never been happy with Java annotations. I always felt they could have been done a lot better. Although JSR 308 does add some much needed annotations (@ReadOnly, @Immutable), I feel like it still fails to satisfy. In my opinion, annotations should help provide Class invariance, but Java annotations were never good enough to provide that really well. Also, I still feel that annotations should have code-generation abilities. I should be able to annotate a class member variable, and the getters/setters should be generated at compile time. The new annotations still don't deliver.
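To illustrate, here is a minimal sketch of what JSR 308 actually buys you (the @NonNull annotation below is my own stand-in; in practice it would come from an external checker library): the language now lets an annotation sit on any use of a type, but the annotations themselves still have to do all the work elsewhere.

import java.lang.annotation.ElementType;
import java.lang.annotation.Target;
import java.util.ArrayList;
import java.util.List;

public class TypeAnnotationSketch {
    // A stand-in annotation for illustration; a real checker library would provide this.
    @Target(ElementType.TYPE_USE)
    @interface NonNull {}

    public static void main(String[] args) {
        // Java 8 allows the annotation directly on the type argument.
        List<@NonNull String> names = new ArrayList<>();
        names.add("example");
        System.out.println(names);
    }
}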
JSR 335: Lambda functions
Lambda functions can be very helpful. I am excited to finally have them, but the implementation seems reminiscent of Python's implementation, which I didn't like. Python is one step better, though, since it supports function pointers, whereas Java still doesn't. Maybe it's because I can't find any good examples, but Java's implementation doesn't look like it can support complex lambda expressions/functions.
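For reference, here is a minimal sketch of the Java 8 syntax (the variable names are mine): a lambda passed to a stream, plus a method reference stored in a functional interface, which is about as close to a function pointer as Java gets.

import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class LambdaSketch {
    public static void main(String[] args) {
        List<String> lines = Arrays.asList("ERROR disk", "INFO ok", "ERROR net");

        // A lambda passed where a functional interface (Predicate) is expected.
        lines.stream()
             .filter(l -> l.startsWith("ERROR"))
             .forEach(System.out::println);   // method reference

        // A "function pointer" of sorts: a method reference held in a variable.
        Function<String, Integer> length = String::length;
        System.out.println(length.apply("hello")); // prints 5
    }
}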
JSR 223: Project Nashorn
I am excited about this. If I understand it correctly, this is a new Javascript engine that makes use of new JVM features that were introduced in Java 7. This should allow Javascript execution to be faster while using less memory (especially in the code cache). I have done a lot with Rhino, so I'm happy about this change.
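Here is a minimal sketch of how I expect to use it, through the same javax.script API that Rhino used; Java 8 registers the new engine under the name "nashorn".

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class NashornSketch {
    public static void main(String[] args) throws ScriptException {
        // Look up the engine Java 8 ships under the name "nashorn".
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("nashorn");

        // Evaluate a small script, just like with Rhino in earlier JDKs.
        Object result = engine.eval("var x = 21; x * 2;");
        System.out.println(result); // prints 42 (the exact numeric type may vary)
    }
}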
Tuesday, August 27, 2013
The Great Bitcoin Arms Race
I got my first bank deposit for mining bitcoins. My wife didn't fully understand what I was doing, so I pulled up a Khan Academy video that explains what bitcoins are. During that video, I learned something new about bitcoin mining. I (like many other people) was under the assumption that the faster you can compute hashes, the faster you mine bitcoins. This turns out not to be true. The truth is, the faster you can compute hashes relative to everyone else that is mining bitcoins, the faster you mine bitcoins. This turns out to be a huge distinction. It creates an atmosphere where miners must constantly invest in mining hardware just to maintain their mining rate. Building mining rigs becomes an arms race against all other miners.
Bitcoin blocks are mined at a rate of around 2016 blocks every two weeks. If it takes less than two weeks to mine the last 2016 blocks, then all the miners in the world "decide" to make the mining algorithm harder to solve. They do this until the mining rate is back up to 2016 blocks every two weeks. This forces the mining rate to stay roughly constant. Any improvements in hardware only give the miner a temporary advantage until everyone else has an opportunity to catch up.
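To make that "decision" concrete, here is a minimal sketch of the retargeting arithmetic as I understand it (the class and method names are mine): scale the difficulty by how much faster or slower than two weeks the last 2016 blocks arrived; Bitcoin also caps each adjustment at a factor of 4 in either direction.

public class DifficultyRetarget {
    static final double TWO_WEEKS_SECONDS = 14 * 24 * 60 * 60;

    // Every 2016 blocks: if they came in faster than two weeks, difficulty goes up;
    // if slower, it goes down. The adjustment is capped at 4x either way.
    static double retarget(double oldDifficulty, double actualSeconds) {
        double ratio = TWO_WEEKS_SECONDS / actualSeconds;
        ratio = Math.max(0.25, Math.min(4.0, ratio));
        return oldDifficulty * ratio;
    }

    public static void main(String[] args) {
        // If the last 2016 blocks took only one week, difficulty doubles.
        System.out.println(retarget(1000.0, 7 * 24 * 60 * 60)); // prints 2000.0
    }
}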
Below is a table of awards 4 fictional people would get over a period of 56 weeks. In this scenario, Bob is very eager to earn bitcoins. He has money to invest into his mining rig, so he is constantly buying better hardware to increase his share. Mary is an active miner, but doesn't go out and buy the latest hardware right away. Alice and Steve are casual miners who keep up with mining, but don't go out and buy hardware until they have to.
Period | Bob (Type / Rate / Award) | Mary (Type / Rate / Award) | Alice (Type / Rate / Award) | Steve (Type / Rate / Award)
1  | CPU / 10 / 504      | CPU / 10 / 504      | CPU / 10 / 504      | CPU / 10 / 504
2  | GPU / 100 / 1551    | CPU / 10 / 155      | CPU / 10 / 155      | CPU / 10 / 155
3  | GPU / 100 / 916     | GPU / 100 / 916     | CPU / 10 / 92       | CPU / 10 / 92
4  | GPU / 100 / 650     | GPU / 100 / 650     | GPU / 100 / 650     | CPU / 10 / 65
5  | GPU / 100 / 504     | GPU / 100 / 504     | GPU / 100 / 504     | GPU / 100 / 504
6  | FPGA / 250 / 916    | GPU / 100 / 367     | GPU / 100 / 367     | GPU / 100 / 367
7  | FPGA / 250 / 720    | FPGA / 250 / 720    | GPU / 100 / 288     | GPU / 100 / 288
8  | FPGA / 250 / 593    | FPGA / 250 / 593    | FPGA / 250 / 593    | GPU / 100 / 237
9  | FPGA / 250 / 504    | FPGA / 250 / 504    | FPGA / 250 / 504    | FPGA / 250 / 504
10 | ASIC / 10000 / 1875 | FPGA / 250 / 47     | FPGA / 250 / 47     | FPGA / 250 / 47
11 | ASIC / 10000 / 983  | ASIC / 10000 / 983  | FPGA / 250 / 25     | FPGA / 250 / 25
12 | ASIC / 10000 / 666  | ASIC / 10000 / 666  | ASIC / 10000 / 666  | FPGA / 250 / 17
13 | ASIC / 10000 / 504  | ASIC / 10000 / 504  | ASIC / 10000 / 504  | ASIC / 10000 / 504
Below is a chart of what each person got as a reward for their effort.
In the chart, all 4 people start out with equal shares of the award. Bob decides to buy a GPU because he wants to get ahead of the other 3 people. His reward shoots up and everyone else's reward shrinks. Mary notices this and decides to buy a GPU to keep up with Bob. By week 6, Mary and Bob are making the same award, which is far larger than Alice and Steve's awards. Alice and then Steve each upgrade to GPUs to keep up with Bob and Mary. By week 10, everyone is making the same exact award as week 2! Bob doesn't like this. He remembers the "golden days" of mining during week 4. He does some research and buys an FPGA that gives him a bit of an edge. Mary, Alice and Steve notice that their awards have dropped, so they decide they have to invest in FPGAs to stay competitive. By week 18, all 4 are making the same award as week 2. Bob is really upset at this. He has invested lots of money (to make money) and finds another upgrade to buy. He decides to buy an ASIC device and his award jumps sharply. Just like the 2 previous upgrades, it doesn't take long before everyone else has ASICs and everyone is making the same award again.
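Working backwards from the table, the per-period award appears to follow a simple proportional split: each miner's award is their hash rate divided by everyone's combined hash rate, times the roughly 2016 blocks minted per period. Here is a minimal sketch of that arithmetic (the class and method names are mine); it reproduces period 2 of the table, right after Bob buys his GPU.

public class MiningShare {
    // Blocks minted per difficulty period (roughly two weeks).
    static final int BLOCKS_PER_PERIOD = 2016;

    // A miner's award is proportional to their share of the total hash rate.
    static long award(double minerRate, double totalRate) {
        return Math.round(minerRate / totalRate * BLOCKS_PER_PERIOD);
    }

    public static void main(String[] args) {
        // Period 2: Bob mines on a GPU (rate 100), the others still use CPUs (rate 10 each).
        double total = 100 + 10 + 10 + 10;
        System.out.println("Bob:   " + award(100, total)); // 1551
        System.out.println("Mary:  " + award(10, total));  // 155
        System.out.println("Alice: " + award(10, total));  // 155
        System.out.println("Steve: " + award(10, total));  // 155
    }
}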
In this scenario, Bob keeps escalating due to the desire to get a larger award. He has money to invest, so he keeps buying faster and faster hardware. This leaves the other miners in a situation where they either have to invest in faster hardware, or they give up. In a much larger population, there will be people who give up. A lot of people will continue to mine, however. Some may even respond by leap-frogging Bob.
If all the miners agreed to limit the hardware that they purchased, they would make more money. In the long run, each miner would receive the same award, but they wouldn't have the hardware or power costs associated with the escalation. This is a great example of the Prisoner's dilemma, however. Each miner has an incentive to cheat. The cheating only gives them a temporary advantage, but it is still an advantage. This is actually studied pretty heavily in economics. Every miner is essentially part of one big cartel.
In the end, do you participate in the arms race or do you give up?