At every single place I have worked, there has always been this fear of breaking code that already works. We are never allowed to update code without a "business reason". This is really frustrating for me. I like creating common code that allows reuse. The reusable code often brings down the cost of implementing new requirements, but that is very hard to quantify. Therefore, I'm stuck with old code that I am forced to "maintain" but I'm not allowed to "update".
Most recently, I was working on a page that had a few legal disclaimers at the bottom. After the disclaimers was a link to a page with more verbose disclaimers. When I asked about the details of the disclaimer page, I was told that it already existed and that I should just use the existing page. I dove into the code and found that the disclaimer page was NOT implemented in a way that promoted code reuse. When someone else needed that disclaimer page, they resorted to copy-and-paste programming! This left me in a bad situation. My business would not accept the idea that I couldn't just reuse the existing disclaimer page. In their view, someone else was able to reuse it, therefore I must be lying. The cheap thing to do would have been to copy-and-paste all over again. I don't like doing that.

Alternatively, I could re-implement the disclaimer page in a way that is reusable. All developers could call a new library that I write that displays the disclaimer page for them. This makes it cheaper to implement the requirement in the future, and it doesn't cost much now: the reusable logic isn't that complicated. You just have to implement it with reusability in mind the first time around. I chose to go with this option, but now I have two different implementations of the same page! I'm not allowed to retrofit the new implementation, since that could "break working code". There is also no guarantee that the next time this requirement comes up, the developer will use my reusable version. In fact, the business will probably tell them not to talk to me, since I gave them a hard time in the first place.
This was just the latest example. I have run into this multiple times. It doesn't matter that you can improve the code. Changing the code means risk. Everyone is afraid of that risk, especially if they don't understand that there is a cost benefit. To me, there should be test cases and automated tests that mitigate those risks. What irritates me the most is that the people who are most against software maintenance tend to be the same ones who tell me automated unit tests are a waste of time!
Software should have automated test cases. Test cases and requirements should be clearly documented (I am still bad at this one). Code coverage should be reasonably high (I let developers define "reasonable"). There should be a clear, repeatable elevation (deployment) plan that supports rollback. If you have all of these things, the risk becomes small enough that you can perform software maintenance tasks without fear.
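As a minimal illustration of the kind of automated check I mean (this is a sketch of my own in C, not code from any project mentioned here, and format_disclaimer is a hypothetical stand-in for whatever the reusable logic is), even a tiny self-checking program that runs on every build answers the "did we break working code?" question automatically:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical reusable routine under test: picks the disclaimer text
 * for a given product.  It stands in for the logic people are afraid
 * to touch. */
static const char *format_disclaimer(const char *product) {
    if (strcmp(product, "loan") == 0)
        return "Rates subject to change without notice.";
    return "See full terms and conditions.";
}

int main(void) {
    /* Each assert documents one requirement and re-verifies it on every build. */
    assert(strcmp(format_disclaimer("loan"),
                  "Rates subject to change without notice.") == 0);
    assert(strcmp(format_disclaimer("card"),
                  "See full terms and conditions.") == 0);
    puts("all disclaimer tests passed");
    return 0;
}
```

With a handful of these wired into the build, "we might break working code" stops being a gut feeling and becomes something the build either proves or disproves.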
Thursday, September 19, 2013
Wednesday, September 18, 2013
The need for "rock star programmers"
A few days ago, there was a post on Slashdot questioning the need for "rock star programmers". I took some issue with that post. I'm not going to question the definition of a rock star programmer. I'm not going to claim to be a rock star programmer. My issue with the post is about when a company should use a rock star programmer. The author of the post classifies problems on a scale from 1 to 10 based on complexity/difficulty. He claims rock star programmers are only needed when solving a 10. Therefore, most companies don't need rock star programmers because they usually don't have problems above a 6 or a 7. I disagree with that assertion. I don't look at good programmers as people who tackle difficult problems. To me, a good programmer provides a good solution to the problem, regardless of the difficulty/complexity. If you rate solutions from 1 to 10, then rock star programmers can create a 10 solution to a 1 problem.
I think the quality of the solution is more important than the fact that a person has found a solution. I have seen bad solutions wherever I go. If your company has a bunch of 6 problems, but you get a bunch of 1 solutions, then your infrastructure is just plain bad.
Hire rock star programmers. Hire good programmers. Get good solutions. Have a good infrastructure. Don't just solve the problem as cheaply (poorly) as you can. A rock star programmer can crank out a 10 solution in the time it takes a weak programmer to create a 1 solution.
Monday, September 16, 2013
64bit iPhone: Innovation or Buzz
Over ten years ago, I was in a GameStop with a buddy of mine. We were in the PC gaming section (you know, back when GameStop still had a PC section) talking about some of the new PC games coming out. Someone we didn't know came over wanting to talk to us, then changed the subject to ask what we thought about 64bit CPUs and whether they would take off. My buddy (who reads this blog, so he might comment) decided to troll him and said "Sun has been using 64bit for years". The guy had this weird, confused look on his face. I laughed, but felt bad for the guy.
Just like servers, and then desktops after them, all phones will eventually run a 64bit CPU. The question is when. Apple has decided the time is now, but is that just for show, or do they have something in store for us? ARMv8 is still really new and was designed to help ARM break into the server market, so is this the right move for Apple?
First of all, there is a performance benefit to using 64bit. The benefits can be lumped into two categories: math and memory throughput. Let's break math down into two more sub-categories. First, there is 64bit arithmetic. Very few programs need to do 64bit arithmetic. 64bit arithmetic is so slow on 32bit CPUs that programs generally don't do it unless they absolutely have to. Those programs can expect a performance boost by running on a 64bit CPU. 3D games generally fall into this category. 2D games, however, do not.
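To make the "slow on 32bit CPUs" point concrete, here is a small C sketch (my own illustration, not anything from Apple or ARM): a 64bit add that a 64bit CPU does in one instruction has to be split by a 32bit CPU into a low-word add plus a high-word add-with-carry, roughly like the hand-written emulation below.

```c
#include <stdint.h>
#include <stdio.h>

/* On a 64bit CPU this compiles to a single add instruction; on a 32bit
 * CPU the compiler has to emit an add/add-with-carry pair. */
uint64_t add64(uint64_t a, uint64_t b) {
    return a + b;
}

/* Roughly what the 32bit emulation looks like, spelled out by hand. */
uint64_t add64_emulated(uint32_t a_lo, uint32_t a_hi,
                        uint32_t b_lo, uint32_t b_hi) {
    uint32_t lo = a_lo + b_lo;
    uint32_t carry = (lo < a_lo);            /* did the low word overflow? */
    uint32_t hi = a_hi + b_hi + carry;
    return ((uint64_t)hi << 32) | lo;
}

int main(void) {
    /* Both print 6000000000 */
    printf("%llu\n", (unsigned long long)add64(3000000000ULL, 3000000000ULL));
    printf("%llu\n", (unsigned long long)add64_emulated(3000000000U, 0,
                                                        3000000000U, 0));
    return 0;
}
```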
The second sub-category is when you are running the same arithmetic over a set of data. In theory, 64bit arithmetic would allow you to double the performance of 32bit arithmetic by treating two 32bit add operations as a single 64bit add operation. 32bit ARM processors generally already have support for 64bit and 128bit SIMD operations, though, so you won't see much of an improvement there.
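Here is a rough C illustration of that packed-add idea (a sketch of mine, not actual NEON or ARMv8 code): two 32bit additions folded into one 64bit addition. It only works when neither lane can overflow into the other, which is exactly why hardware SIMD units keep the lanes separate and why 32bit ARM chips already cover this case well.

```c
#include <stdint.h>
#include <stdio.h>

/* Add two pairs of 32bit values with a single 64bit addition ("SWAR").
 * Only safe if neither 32bit lane overflows; otherwise the carry from
 * the low lane corrupts the high lane. */
uint64_t packed_add(uint32_t a0, uint32_t a1, uint32_t b0, uint32_t b1) {
    uint64_t a = ((uint64_t)a1 << 32) | a0;
    uint64_t b = ((uint64_t)b1 << 32) | b0;
    return a + b;                             /* one add instead of two */
}

int main(void) {
    uint64_t r = packed_add(10, 20, 1, 2);
    printf("lane0=%u lane1=%u\n",
           (uint32_t)r, (uint32_t)(r >> 32));  /* lane0=11 lane1=22 */
    return 0;
}
```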
The other performance benefit is in memory operations. When using registers to copy memory, you generally copy one "word" at a time. On a 32bit CPU, this means you copy 4 bytes of data for every copy operation. When moving semi-large amounts of data, like 4KB, you are using a lot of cycles. You can roughly cut that in half when your word size doubles to 64 bits. The most common use case for this is image copy operations, sometimes referred to as a blit. The general UI could see a speedup because of this.
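A back-of-the-envelope sketch of that copy argument (my own illustration; real memcpy implementations are much more clever and also use SIMD): copying a 4KB buffer one register-sized word at a time takes 1024 iterations with 32bit words but only 512 with 64bit words.

```c
#include <stdint.h>
#include <stddef.h>

#define BUF_SIZE 4096   /* bytes */

/* 4096 / 4 = 1024 load/store pairs with 32bit words. */
static void copy32(uint32_t *dst, const uint32_t *src) {
    for (size_t i = 0; i < BUF_SIZE / sizeof(uint32_t); i++)
        dst[i] = src[i];
}

/* 4096 / 8 = 512 load/store pairs with 64bit words. */
static void copy64(uint64_t *dst, const uint64_t *src) {
    for (size_t i = 0; i < BUF_SIZE / sizeof(uint64_t); i++)
        dst[i] = src[i];
}

int main(void) {
    static uint32_t a32[BUF_SIZE / sizeof(uint32_t)], b32[BUF_SIZE / sizeof(uint32_t)];
    static uint64_t a64[BUF_SIZE / sizeof(uint64_t)], b64[BUF_SIZE / sizeof(uint64_t)];
    copy32(b32, a32);
    copy64(b64, a64);
    return 0;
}
```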
Like most performance gains, there is a memory trade-off. Pointers are generally represented using the word size. So, doubling the word size to 64 bits means doubling the amount of memory a process uses for pointers. This can add up to a much larger memory footprint for an application, which is really bad for low-memory devices like mobile phones.
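A quick way to see that footprint cost (again, my own illustration): a pointer-heavy structure roughly doubles in size when pointers grow from 4 to 8 bytes, and an app with millions of such nodes doubles that part of its memory usage.

```c
#include <stdio.h>

/* Three pointers plus a small payload: roughly 16 bytes on a 32bit
 * build, roughly 32 bytes on a 64bit build (pointer growth plus
 * alignment padding). */
struct node {
    struct node *left;
    struct node *right;
    struct node *parent;
    int value;
};

int main(void) {
    printf("sizeof(void *)      = %zu\n", sizeof(void *));
    printf("sizeof(struct node) = %zu\n", sizeof(struct node));
    return 0;
}
```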
This issue segues us out of the real benefits of 64bit and into the marketing hype. If you listen to the hype, a larger memory footprint might not be a big deal, since 64bit word sizes allow you to address more memory! Although that is true, it is also pretty irrelevant in this case. A 32bit word size does limit addressable memory, but it limits it to 4GB. That is 4 times the RAM size of the iPhone 5. Obviously being 32bit is NOT the limiting factor. Therefore, the extra addressable memory advantage is hype, and it doesn't help with the very real problem of your apps now having a larger memory footprint. In fact, Unix can play games with addressable memory to make the whole upgrade to 64bit unnecessary (for now).
Here is where the true power of upgrading to 64bit sits. I'm actually disappointed in Apple's marketing, because I feel like they missed an opportunity here. I guess when your customer base always buys the latest version, marketing the longevity of a device is not as important as it obviously is to the Apple engineers. So far, Apple hasn't said a lot about the possibility of releasing apps that work on both iOS and OSX. The reason for this is easily explainable: the software technology isn't ready yet. But here is the thing: software can be upgraded after the fact. Just because the hardware team got there first doesn't mean you should postpone the whole project. Apple's engineers are positioning themselves in a way that could revolutionize mobile devices. I have talked about this idea before. The merging of desktop/laptop/tablet and phone can be very big. Although Apple isn't tackling it the way I would have, at least they are tackling it.
In the end, 64bit is mostly hype for now, but by releasing 64bit now, Apple ensures that when they start making some major changes in how we view mobile devices, the iPhone 5s won't be left in the dust.
Tuesday, September 10, 2013
Limiting Facebook the way we all should
My phone has a pre-Facebook Home version of the Facebook app. For those of you who don't know, Facebook Home is an alternative Launcher for Android that turns your entire experience into a Facebook experience. There is another detail that a lot of people don't know. A lot of the logic to implement Home is not in the Home app, it is in the main app. This means the main app now requires access to make phone calls and get a list of running apps. This is way too much for Facebook to have, so I never upgraded the main app.
Recently, I have been getting messages in my feed in the app telling me that my version of the app might stop functioning soon. As a (new) mobile developer, I can appreciate the need to limit the number of versions a company has to support. I still don't want to upgrade because the last thing I need is a Facebook worm that forces my phone to dial 1-900 numbers.
I took to Facebook to complain about this problem, knowing that Facebook's NSA-style monitoring might kick in. I expressed my desire for some sort of Access Control List to disable individual permissions for any app that I have installed. Out of all the features of Android 4.3, the app ACLs were the feature I was most looking forward to. When my Nexus 7 got the 4.3 upgrade, I looked, but couldn't find the feature. That is when one of my coworkers pointed out that the ACL (App Ops) is hidden by default. She told me I could go to the market and get an app that launches the hidden App Ops Activity. Once there, I could disable some of the permissions that I don't want Facebook to have (Facebook can no longer read my contacts!).
Now, the only problem is that my phone (AT&T Samsung Galaxy SII Skyrocket - SGH-I727) only supports up to Android 4.1.2. I might have to wait a long time (if ever) before getting 4.3. CyanogenMod has nightlies of 10.2, which is based on Android 4.3. I might have to root my phone and install CM.
Monday, September 9, 2013
Foscam FI8918W Wifi not working: Try 2
In a previous post, I talked about the problems I had with a Foscam FI8918W. The wifi kept cutting out, so I had to resort to using an ethernet cable. Once I started using the ethernet cable, everything worked fine. I didn't know if I had a dud or if it was a problem with the whole line of cameras. Because of that, when I had a need for another camera, I decided to purchase another one. Unfortunately, that camera had similar issues.
I bought the new camera to sit inside my living room, pointing out the front window. Although I could run ethernet, I really wanted to use the wifi. The first camera didn't quite have line of sight: the wifi signal had to travel through the floor to get to the router, but the router was directly under the camera. For this second camera, I have a secondary access point in the living room. The access point is literally 10 feet away from the camera. The second camera does last longer than the first camera. The first camera would cut out after 5 minutes; the second camera at least lasts a few hours. I don't use the camera all the time, but if every time I want to use the camera I have to physically unplug and replug its power, then it becomes kind of useless to me. On the plus side, since the baby's room already has ethernet, my wife is very excited about possibly having a second camera pointing at the crib.
At this point, I can't recommend Foscam to anyone. I did try to give the brand a second chance, but it just didn't work out.
Thursday, September 5, 2013
Considering the Minix Neo x7 over the MK802 IV
In a previous post, I talked about how the MK802 IV had the potential to go mainstream. I believe that the device has failed in that goal. Many reviews talk about problems with Netflix and other apps. Although I am currently a strong supporter of MK802s, that support is wavering. For my next purchase, I'm considering a Minix Neo x7. These devices are far larger than MK802s, so they are not as portable, but they do offer a lot of features: built-in ethernet, an external antenna, more USB ports and a remote control.

The x7 is the closest thing to a perfect device so far, but it is missing one major feature. Right now, the firmware only has a 720p kernel. It will still output at 1080p, but it will be upscaled from 720p. For me, that is not a big deal; my bedroom TV is only 720p, though I would really like to keep 1080p in my living room. The goal is to get this technology mainstream, however, and the mainstream really loves its 1080p. It really is an essential feature. I understand that there are heating issues; most Android mini-PCs don't have active cooling. The Neo x7 has a large enough case that I am surprised they don't use a really large heatsink to help with the cooling. They might even be able to squeeze in a small fan if needed. I think the x7 is the closest bet on Android TVs going mainstream.
Tuesday, September 3, 2013
USB IO Errors and RAID vs ZFS
About 8 years ago, I created a RAID-5 with 4 hard disks. Two of the disks were IDE and two were USB external disks. Each disk was 250GB, giving me 750GB of usable space (RAID-5 spends one disk's worth of capacity on parity, so 4 x 250GB yields 3 x 250GB). Everything seemed to work, so I moved a lot of my data onto the RAID. At some point, the RAID disappeared. I looked at the logs, and the two USB external disks had been flagged as bad. Because a RAID-5 can't survive losing 2 disks, I lost ALL of the data. Here is the thing: the disks weren't bad! That event is when I learned that USB hard disks return random IO errors. These IO errors are small and recoverable, but the RAID software interpreted them as bad sectors. Once one of the sectors was marked bad, the entire hard disk was flagged as bad because it no longer had the same capacity as the other disks. It turns out backups are important.
After that incident, I played around with EVMS to create a pool out of the disks. The volume management allowed me to see some interesting effects. Whenever an IO error occurred with one of the USB disks, the available pool size shrank. Since the volume was smaller than the pool size, however, I didn't lose any data like I did with a raw RAID.
I am now playing with ZFS. I have a RAID-Z with SATA hard disks. I started getting IO errors on one of my hard disks. These are real errors related to my brand new 3TB WD drive going bad. I had a refurbished 2TB Seagate, but I had to plug it in via USB. I remembered my problems with USB and RAID-5, but this was only going to be one disk, not two. I plugged in the disk and replaced the failing drive in the pool. Like 8 years ago, everything seemed to work fine. After about a day, the first IO error occurred. ZFS handled it just fine. The zpool status command tells you how many IO errors have occurred on the various disks. Because of its per-block checksum (CRC) capabilities, ZFS is able to handle the problem without marking the entire disk as bad. ZFS gives you the option to either ignore the errors or replace the bad disk. That is what makes a good solution!
Monday, September 2, 2013
When it rains, it pours (dead hard disks)
A few weeks ago was the end of a heat wave. During the heat wave, a hard disk died. I RMA'ed that disk, but a week later, another hard disk died. So far, I have not lost much data. I manually stripe files across hard disks. When the first disk died, I re-striped. When the second died, I did lose some data. It got me a little worried, so I decided to create a ZFS pool with blocks on multiple disks. I started copying all of my data to the pool. Although that experience will be written up in a different blog post, I wanted to bring it up a little bit here. ZFS has a WONDERFUL feature (scrubbing) that checksums the entire pool to identify any problems. I ran a scrub a few days ago and discovered that a 3rd hard disk is dying! This one is a brand new disk that I purchased. The only data on it is the ZFS pool blocks, so I can replace it with a new disk, but this is getting kind of ridiculous.