Thursday, January 30, 2014
Microsoft hasn't really innovated anything in years. Apple spent a good chunk of the previous decade innovating its way to the top, but its innovating days are behind it. Google doesn't innovate as much as it used to, but it still innovates. There seems to be a strong correlation between how much a company innovates in a market and its market position: Google's Android sits at the top of the mobile operating system landscape, while Microsoft is near the bottom.
When competitors emerge, companies generally have two ways of competing. The first is to try to hold back your competitor. This usually involves patent wars and the Eastern District of Texas. The other is to actually compete by being better than your competition. Microsoft and Apple have chosen the former. The goal of holding back your competitors is to prevent them from reaching your level of technology so that you don't actually have to do anything. It is much easier to do nothing than it is to actually compete. If people are already buying your technology, there is no incentive to invest in creating new technology. You can just sit around collecting money. What you care about is your income, not the people providing it.
The thing about competition is that it creates better products for everyone. Apple creates a multitouch smartphone (not the first...it just looks appealing to the masses). Google creates a rival operating system and tacks on more features. Apple should respond by adding those features to its phone, but instead it decides to lawyer up and file patent suit after patent suit. This does not help Apple's customers. It doesn't help Google's customers. It doesn't help any other company that wants to create a third major mobile operating system. This is the very definition of anti-competitive.
Let's imagine a world where Apple decided to actually compete instead of whining like a little baby. When Google innovates new features, Apple could copy those features, and Apple customers would be even happier! Heaven forbid Apple actually innovate something new (and not claim logical next steps as innovation).
Tuesday, January 28, 2014
Requirements Annotations
While developing some HTML pages for one of my projects, I started to think about requirements. I was doing TDD with the HTML engine (thanks to my XSLT-based solution) and got to thinking about regression testing and requirements. I have never liked requirements docs, but I also see the need for them. Low level requirements docs tend to be tedious: they provide lots of detail, but in a way that makes them unusable, so developers tend to gloss over them. And although low level requirements are not written by technical people, they tend to dictate parts of the implementation, which can be problematic. Agile tries to tackle some of these issues for initial development, but requirements tend to fall by the wayside...mostly because requirements docs are horrible.
I was also thinking about all the unit tests I have written. New developers on a project may not understand why a particular unit test exists. You can try to name unit tests better and comment them, but the big picture is often lost. You can add a comment listing the requirement that the test covers, but requirements are often written in MS Word using a number-based outline, so requirement numbers change whenever a new requirement is added. I started to think of a better way of handling this.
First off, imagine having a requirements xml file that is coupled with the item being developed and tested. For example, say you have a webpage that is being generated. In my system, that starts with MyPage.xsl. You have beans and logic classes coupled with it: MyPageBean.java and MyPageBeanFactory.java. You have a unit test for the page that mocks out the midtier and generates your HTML: TestMyPage.java. Now, add another file to the mix: MyPage.rqdx (requirements document xml). Inside MyPage.rqdx, you specify all the requirements for MyPage.html. This should include the requirements for what shows up in the generated MyPage.html file, as well as the client-side interactions on that page. Each requirement in that xml file gets a unique name. You can even put metadata on a requirement: when it was requested, when it was implemented, who requested it, maybe even cost-to-implement information. A requirement can also link to another requirement, since sometimes requirement B is only needed because of requirement A. This happens when B is the low level requirement ("MyPage should have this text as a header") for a high level requirement A ("You should have a page called MyPage").
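To make this concrete, here is a rough sketch of what MyPage.rqdx might look like. The format is entirely hypothetical (I am inventing the element and attribute names for illustration), but it shows the unique names, the metadata and a link from a low level requirement back to the high level requirement that spawned it:

    <requirements for="MyPage.html">
      <requirement name="mypage-exists" requestedBy="client" requestedOn="2014-01-02">
        You should have a page called MyPage.
      </requirement>
      <requirement name="mypage-header-text" because="mypage-exists"
                   requestedOn="2014-01-02" implementedOn="2014-01-15">
        MyPage should have this text as a header.
      </requirement>
    </requirements>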
Now, imagine having Java annotations that you can use inside MyPageBeanFactory.java and TestMyPage.java. You have an annotation @ReqImpl(reqs="req1,req2") that marks methods and classes that implement particular requirements, and an annotation @ReqTests(reqs="req1,req2") that marks unit tests that test particular requirements. Code, tests and requirements are now linked together.
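These annotations don't exist anywhere yet, but declaring them would be trivial. A minimal sketch, assuming the names and the comma-separated reqs attribute described above:

    // ReqImpl.java -- marks production code that implements requirements.
    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    @Retention(RetentionPolicy.RUNTIME) // kept at runtime so tools can read it reflectively
    @Target({ElementType.METHOD, ElementType.TYPE})
    public @interface ReqImpl {
        String reqs(); // comma-separated requirement names from the .rqdx file
    }

    // ReqTests.java -- marks unit tests that verify requirements; same shape.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface ReqTests {
        String reqs();
    }

    // Usage inside MyPageBeanFactory.java (the method body is illustrative):
    @ReqImpl(reqs = "mypage-header-text")
    public String buildHeader(MyPageBean bean) {
        return bean.getHeaderText();
    }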
When you put this all together, a few things become possible. First, it is much easier for new developers to find out why code was implemented in the first place: they can follow the annotations back to the requirements xml. Requirements are now part of your revision control system, so you can watch your requirements change over time, just like your code. You can translate a failed unit test into the requirement that is impacted. Tools can be written that calculate "requirement coverage", as opposed to code coverage: the percentage of requirements that have been tested. A big advantage is code retirement. Since you can search for all code related to a specific requirement, you can also find all code that can be removed when a requirement is removed. And with requirements in revision control, you can create a report of new requirements added per release of your software, just by diffing the xml files for the current release against the previous release.
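As a sketch of how cheap that tooling could be, here is a hypothetical "requirement coverage" calculator. It assumes the @ReqTests annotation from above (with runtime retention), takes the set of requirement names parsed out of the .rqdx files, and reports the fraction that have at least one annotated test:

    import java.lang.reflect.Method;
    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class ReqCoverage {
        // allReqs: requirement names parsed from the .rqdx files.
        // testClasses: the unit test classes to scan for @ReqTests.
        public static double coverage(Set<String> allReqs, Class<?>... testClasses) {
            Set<String> tested = new HashSet<String>();
            for (Class<?> cls : testClasses) {
                for (Method m : cls.getMethods()) {
                    ReqTests ann = m.getAnnotation(ReqTests.class);
                    if (ann != null) {
                        tested.addAll(Arrays.asList(ann.reqs().split(",")));
                    }
                }
            }
            tested.retainAll(allReqs); // ignore annotations naming retired requirements
            return allReqs.isEmpty() ? 1.0 : (double) tested.size() / allReqs.size();
        }
    }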
Another change that could be interesting for Agile teams is adding unimplemented requirements. When a new requirement is identified, you can add it to the xml file right away, where it sits as an unimplemented requirement. You can write a tool that counts unimplemented requirements, just like the one for untested requirements. As developers implement requirements, you get a "burn down" chart of implemented requirements. At the end of every sprint, you have the number of requirements added, removed, implemented and tested.
Overall, this idea could help make a piece of software more maintainable. Tooling could make the management of requirements much more useful. Developers would be able to read and implement requirements more easily, since they would only see the requirements that are relevant. Project managers could see the progress of implementing requirements. Release managers could easily identify what new features are available in a release. Tying code to requirements makes maintaining that code much better.
As a whole, the core of this solution doesn't seem like it would be too hard to implement. The tooling would take the most time, but I think different organizations would want different tooling. I really wish I had more time to run with this.
Thursday, January 9, 2014
Android RatingBar not scalable
I am writing an Android app that uses the RatingBar widget. The problem is that the RatingBar is not scalable: the images used to render the stars are of a fixed size. To make the widget smaller, you use smaller images; to make it larger, you use larger images. I haven't found a good way to use one image and have Android scale it to the size of the widget.
In my app, I have a ListView that contains a RatingBar in each list item. With limited real estate, I wanted the RatingBar to look smaller. I also had a blue background, so I made the stars yellow. While I felt the rating was important, it wasn't the most important piece of information in the list item. When you click on the list item, another activity comes up. Since that activity is dedicated to the item, I have a lot more screen real estate, so on that screen I used the bundled Android images. My wife complained: the RatingBar in the details screen didn't look as good as the one in the list view. She wanted them to look the same....just scaled differently. I fired up GIMP and created another set of star images for that size. Luckily I only have two sizes to display. It would be nice if I could just have one set of images, though.
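For reference, the closest the stock widget gets is the built-in small style, which is still just a different set of fixed-size images rather than real scaling. A typical list-item declaration looks something like this (the id and values are illustrative):

    <RatingBar
        android:id="@+id/itemRating"
        style="?android:attr/ratingBarStyleSmall"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:numStars="5"
        android:stepSize="0.5"
        android:isIndicator="true" />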
Monday, January 6, 2014
Late Requirements
In software engineering, there always seems to be a problem with late requirements. My wife is doing consulting work for a company that keeps adding requirements but won't push back the deadline. During my most recent company project, I was getting new requirements after the code lockdown date. Then management started complaining about the large number of defects created after code lockdown. In the Agile world, late requirements aren't supposed to be a big deal, but when a requirement is added, another is supposed to be delayed. I don't see that happening often.
The problem isn't in the development. It isn't an "IT" problem. It is a client problem. Clients are the ones demanding these requirements. Yes, they are the clients and they are the ones signing the paychecks, but clients need to understand that time is finite. Software engineers can't stop time to work on requirements. As a profession, we need to put our foot down and put the ownership of the problem back on the clients: if they want to add late requirements, they need to agree to extend deadlines.
Friday, January 3, 2014
ZFS Performance Issues on Low Memory Computer (Amazon EC2)
I recently set up ownCloud on an Amazon EC2 instance. I chose to put the data onto a separate EBS volume. Although EBS supports taking snapshots of a volume, you really shouldn't snapshot a live file system that isn't aware snapshots can be taken of it. For that reason, I decided to use a file system with snapshot capabilities built in. I am a huge fan of ZFS, so I got ZFS installed onto my EC2 instance and was good to go.
After about a month of usage, I noticed a performance problem. I SSHed to the VM and noticed that a process called spl_kmem_cache was taking all of my CPU. After some googling, I discovered that the process was related to ZFS's RAM cache of the disk: the ZFS L1 cache (the ARC) is stored in RAM and uses a variant of the adaptive replacement cache algorithm to decide which pages to evict.
The problem is that the ZFS L1 cache does not work well in a low memory environment. ZFS was designed for servers, and servers usually have lots of RAM. EC2 micro instances barely have more than 580MB of RAM, though. After consuming most of the RAM, the ZFS L1 cache started thrashing, causing the spl_kmem_cache process to use up all my CPU. No RAM and no CPU makes Homer something something. Go crazy? Don't mind if I do!
I read about various L1 cache tweaks you can set through the /proc file system. None of them helped. I almost gave up hope until I decided to look into disabling the L1 cache. By running the command zfs set primarycache=metadata <poolname>, I disabled L1 caching for data (metadata is still cached). After making the change, my VM came back to life.
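For anyone hitting the same wall, the change plus a quick sanity check looks like this (substitute your own pool name for <poolname>):

    # Stop caching file data in the ARC; metadata stays cached.
    zfs set primarycache=metadata <poolname>
    # Confirm the property took effect.
    zfs get primarycache <poolname>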