October 29, 2009
It seems like computers are becoming more and more pervasive. They are everywhere: smartphones, cars, GPS units, and many other devices. Basic computer technology is just so cheap; you can fit a processor, memory, and flash storage into a very small space for a very low price. Some devices might need a screen or some form of input like buttons, but that just depends on what the device is intended for.
Where will the next big push go? I keep hearing about appliances like fridges with built-in computers to tell you what's in your fridge and what you need to buy at the store, but these haven't really become widely available. Maybe it will be in clothing? Your shoe will be able to track and trend how many steps you take and how far you go on a daily basis.
All I can assume is that computers will continue to be more and more available and used in our everyday lives, and not just as traditional desktops or laptops. How long until computers are embedded in us?
October 28, 2009
MySQL vs Oracle vs Microsoft SQL Server
At a previous job I worked as a software developer supporting a set of data migration tools and frameworks. Most of the databases we worked with were either Oracle or Microsoft SQL Server. We had a few occasions to work with MySQL or PostgreSQL, and even a few where the "databases" were CSV files or Excel spreadsheets. I became fairly familiar with the strengths and features of Oracle and SQL Server, especially in the areas of high performance loading and extraction of data. Some of the starting and/or ending datasets were upwards of 200 GB, and with the tight scheduling constraints of data conversion we needed to run the conversions as quickly as possible, to minimize downtime between taking down the old system and starting up the new one.
Recently I've been working more with MySQL as a backend database for a number of different applications. Overall I really like it: it performs well, is easy to maintain, and the price is obviously right. But I have noticed that it takes a little more work to tune query performance in MySQL than I remember in Oracle or SQL Server. If I remember correctly from my college days, relational algebra provides a pretty good framework for reworking and optimizing queries. MySQL seems to either not do this at all, or do a really bad job of it. I have found myself on multiple occasions with a SQL query that performs much more poorly than I would expect. Sometimes there are index issues that require altering the database in some way, but in most cases the real problem is that the query itself needs to be reworked. Switching a
SELECT * FROM tableA, tableB WHERE tableA.field = tableB.field
to a
SELECT * FROM tableA JOIN tableB ON tableA.field = tableB.field
can make a huge difference. Shouldn't the query engine be able to determine that these are equivalent and adjust accordingly? Oracle and SQL Server seemed to be able to; there was usually very little I could do by hand to improve query performance in those databases.
There are other cases where I've had to modify a query slightly into something functionally equivalent but dramatically faster. I realize there are many cases where a query optimizer would not easily find the best or even a better way to execute a query, but MySQL doesn't even seem to try. Does MySQL's optimizer even attempt this kind of rewriting? Hopefully this is on the radar for MySQL, because it seems like it wouldn't take too much work to get some pretty substantial performance wins.
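MySQL's EXPLAIN statement is the right first stop for seeing what plan each form of a query gets. Beyond that, the quickest sanity check is simply timing both forms against real data. Here is a rough C# harness using MySQL Connector/NET; the connection string and the two queries are placeholders based on the example above, so treat this as a sketch rather than a finished tool:

using System;
using System.Diagnostics;
using MySql.Data.MySqlClient; // MySQL Connector/NET

class QueryTimer
{
    static void Main()
    {
        // Placeholder connection string; substitute your own server and schema.
        const string connStr = "Server=localhost;Database=test;Uid=user;Pwd=pass;";
        Time(connStr, "SELECT * FROM tableA, tableB WHERE tableA.field = tableB.field");
        Time(connStr, "SELECT * FROM tableA JOIN tableB ON tableA.field = tableB.field");
    }

    static void Time(string connStr, string sql)
    {
        using (var conn = new MySqlConnection(connStr))
        using (var cmd = new MySqlCommand(sql, conn))
        {
            conn.Open();
            var watch = Stopwatch.StartNew();
            using (var reader = cmd.ExecuteReader())
            {
                int rows = 0;
                while (reader.Read()) rows++; // drain all rows so the timing covers the full query
                watch.Stop();
                Console.WriteLine("{0} rows in {1} ms: {2}", rows, watch.ElapsedMilliseconds, sql);
            }
        }
    }
}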
October 27, 2009
WordPerfect vs. Microsoft Word
I know this battle is long over, but I was trying to do some fairly complex layouts in Microsoft Word a few days ago, and I remembered how much I loved WordPerfect. For good or bad, Word has become ubiquitous. Fortunately, almost all of the documents I create are simple paragraphs with maybe a few headers and some lists; anything more complex than that and Word seems to fall apart. The last big, important document I worked on was a few years ago: my master's thesis. Rather than try LaTeX, I decided to stick with tried and true WordPerfect. I had heard too many horror stories of people trying to use Word and getting something just a little messed up, with no way to correct it. They would have to take their thesis content and recreate it in a new Word document.
Obviously this wasn't something I wanted to do, so I brought out my trusty copy of WordPerfect and went to work. I will admit there were a few struggles to get the layout just right, but nothing that required huge amounts of time or starting over. Beyond the fact that WordPerfect just seems more intuitive for tables, diagrams, and images, it has reveal codes. For anyone not familiar with WordPerfect, reveal codes are kind of like looking at the HTML behind a web page. I know Word has something that lets you look at style information, but it doesn't give you nearly the same information or control as reveal codes.
I also realize that something like reveal codes is not for everyone. I am a software developer, so raw document codes like reveal codes, or HTML, or C# don't scare me. In WordPerfect I have the best of both worlds: a really good WYSIWYG editor where I can do most of my work, and an equally good view of the underlying document codes that lets me fine-tune, fix, and tweak my documents. If I get a stubborn image that won't stay where I want it, I can look at its detailed properties in the reveal codes to see why it isn't behaving like the rest of my images. In Word my best bet would be to remove the image, add it to the document again, and cross my fingers.
October 26, 2009
Computer prices vs. Computer power
I just finished buying components for a new PC for my wife, and I surprise myself every time with how cheap computers have become. For only about $500 I was able to build a quad core computer with 8 GB of memory, a GeForce 9500 with 512 MB of memory, and a 500 GB hard drive. I think the first computer I bought was a 486 DX2 with a 66 MHz processor, and I don't even remember how much memory or disk space it had.
It seems like most people don't even need expensive computers anymore. Most mainstream computers can handle almost anything you throw at them. There will always be specific tasks that can take advantage of high end computers and workstations, but most people don't need them. From basic tasks like word processing and browsing the Internet to high end computer games, most computers will handle the job just fine.
When I was younger I used to dream about high-end, super powerful computers. I'd go to a computer website like Dell and spec out the coolest, most expensive computer I could. Most of those computers I dreamed about are less powerful than the computer I just bought. I almost think that computers are plateauing: I don't see the same rush to make a faster processor or build a bigger hard drive. Most of the basic computer technology is more than sufficient for the next few years at least. It just doesn't seem like we need processors faster than 3 GHz or disk drives bigger than 1 TB for home desktop computers. What we need now is applications and operating systems that can take advantage of the power already available to make our lives and jobs easier. Hopefully someone will take up the job.
October 24, 2009
C# 4.0 and Visual Studio 2010
I've started looking into C# 4.0, the Microsoft .NET Framework 4.0, and Visual Studio 2010, hoping there will be some cool new features that I will like. I've done very little with the new features of .NET 3.0 and 3.5, and didn't notice much new in Visual Studio 2008. Here are some of the things I've seen that look interesting:
- Parallel Extensions for the .NET Framework: Now that most computers have 2-4 processors, it has become apparent that parallel programming is going mainstream. Instead of being relegated to high end scientific and business applications on large supercomputers and distributed systems, everyday programmers are going to need to know and use techniques for parallel programming. These extensions look like a step in the right direction. Combined with solid software development practices, they can get a developer headed toward easily taking advantage of the multiple cores available in computers today (see the sketch after this list).
- C# optional parameters: One thing that I miss from C++ is finally making its way to C#. I can't count the number of times I have had to create multiple variations of a function just to mimic optional parameters. What used to be 3 or 4 functions with slight variations in parameters can now become a single method definition: easier to maintain, easier to use, and much more convenient. Plus, with named arguments you don't even need to specify the parameters from left to right; you can pick and choose which parameters to set when you call the function (also shown in the sketch below).
- Static IDs for ASP .NET controls: I've always wondered why Microsoft decided to enforce their control naming on all ASP .NET developers. I could understand if the naming standard were just the default, because it does ensure that all of the names are unique, but we are finally getting a way to specify the names we want. This will make my life so much easier, especially for JavaScript code and form post-backs. With multiple nested master pages and containers, the length and complexity of generated control names is ridiculous. I've actually had a few cases in JavaScript where I had to create a lookup variable to map my usable names to the actual control names.
- Dynamic programming and dynamic variables: I have to admit that I haven't done anything with the dynamic languages and features that already exist in .NET, and I don't intend to start now. I tend to prefer the enforced structure and design of statically typed development, but it is nice to know it is available if I ever want to give it a try.
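To make the first two items concrete, here is a minimal C# 4.0 sketch showing optional parameters, named arguments, and Parallel.For. The Connect method and its parameters are made up for illustration:

using System;
using System.Threading.Tasks;

class Csharp4Features
{
    // Optional parameters: one definition replaces 3 or 4 overloads.
    // Connect is a hypothetical example method.
    static void Connect(string host, int port = 3306, bool useSsl = false)
    {
        Console.WriteLine("Connecting to {0}:{1} (ssl={2})", host, port, useSsl);
    }

    static void Main()
    {
        Connect("dbserver");               // port and useSsl fall back to their defaults
        Connect("dbserver", useSsl: true); // a named argument skips right over port

        // Parallel Extensions: spread an independent loop across the available cores.
        int[] squares = new int[1000];
        Parallel.For(0, squares.Length, i =>
        {
            squares[i] = i * i; // each iteration touches only its own element, so no locking is needed
        });
        Console.WriteLine("Computed {0} squares in parallel", squares.Length);
    }
}

The nice part of Parallel.For is that the loop body still reads like an ordinary loop while the runtime handles partitioning the work across the cores.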
October 23, 2009
Is Linux the operating system of the future?
I've been using different variations of Linux for 10-15 years, and there have been some pretty dramatic improvements over that time. As a server operating system Linux is great; it has many advantages over Windows, and Windows has many advantages over it. Both operating systems can be good choices, depending on your needs and requirements.
I'm more interested in the desktop side of things. Linux still hasn't made many inroads on the desktop. Many techies have dual boot installations with both Windows and Linux, some ultra cheap computers come with Linux, and some companies and government institutions have converted wholesale to Linux. But most of these cases are exceptions to the rule: Windows is by far the most popular desktop operating system.
I think the biggest reason for this is compatibility: all of the applications that people use work on Windows, everyone else uses Windows, and everyone knows Windows. Linux has a huge hurdle to overcome to be able to compete. Even if Linux is superior technology and runs more efficiently, and even if its GUI is comparable to the UI in Windows, it doesn't have the same value as Windows.
When you select Windows as your operating system you don't have to worry about training, you don't have to worry as much about application incompatibilities, and you don't have to worry about whether your employees will be able to interact with external parties. With Linux, many or all of these are potential issues that will require time and money to overcome. I don't think there are any significant flaws in Linux. It isn't perfect, but Windows isn't either. Even with all of the things Linux has going for it, though, it doesn't have the momentum to really beat Windows. At least not yet...
October 22, 2009
Why are netbooks so popular?
I've been wondering lately why netbooks are becoming so popular. I've always seen these devices as a small niche between smart phones and real laptops. They sit smack dab in the middle of the two worlds, with few of the advantages of either. Smart phones are ultra portable: they fit in your pocket. They may lack processing power and screen real estate, but you can take them everywhere. Laptops are not as portable, but you can still take them with you on the go, and they can have considerable processing power and a good screen size, so they are still very useful computing platforms.
Netbooks fall in the middle. They are small, but not small enough to carry in your pocket; it seems like you would still need a briefcase or bag to carry them around. They don't seem much more powerful than a smartphone, and they have considerably less screen real estate than a laptop.
Maybe I'm just not the target audience. I either want ultimate portability, where I'm willing to sacrifice performance, or I want ultimate power so I can get real work done. But I'm a software developer and a techie, so I probably have different needs than the average Joe.
October 17, 2009
Code Profiler for .NET
I've always been interested in profiling my C# code. Years ago, with .NET 1.1, I used the DevPartner Profiler Community Edition, which is no longer available. Ever since then I have been unable to find a good free or open source solution for profiling .NET code. I know there are some decent commercial products out there, but I'm cheap, and I don't use the tools often enough to merit purchasing them.
Is code profiling just not in demand? It seems like if enough people were interested in the value of profiling, there would be at least one decent open source solution. For me it has been fun on occasion to dig deep into an algorithm I'm working on, trying to eke just a little more performance out of it. I've found that disassembling the code also helps to see what is actually happening behind the scenes. One of the nice things about profiling .NET code is that you don't need to instrument the code manually. When I used the DevPartner Profiler I just picked the options I wanted and clicked go. After running the application I could delve into the details of which functions were being hit the most, and even which lines of code were consuming the most time. It can be a challenge to tune the performance of an algorithm or an application, but it is rewarding to see the code double in speed, or better.
Hopefully I'll be able to find a good open source code profiler so I can "get my fix" of performance tuning my code.
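In the meantime, the fallback is exactly the manual instrumentation that a profiler saves you from: wrapping suspect code in a System.Diagnostics.Stopwatch. A minimal sketch, where SortRecords is just a made-up stand-in for whatever code is under investigation:

using System;
using System.Diagnostics;

class ManualTiming
{
    static void Main()
    {
        var watch = Stopwatch.StartNew();
        SortRecords(); // stand-in for the code being measured
        watch.Stop();
        Console.WriteLine("SortRecords took {0} ms", watch.ElapsedMilliseconds);
    }

    // Hypothetical workload: sort a large array of random integers.
    static void SortRecords()
    {
        var rand = new Random(42);
        var data = new int[1000000];
        for (int i = 0; i < data.Length; i++)
            data[i] = rand.Next();
        Array.Sort(data);
    }
}

This works, but it only answers questions you already know to ask, which is exactly why a profiler that surfaces the hot functions across a whole application is so much more valuable.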
October 16, 2009
Weird JavaScript and AJAX errors
I recently implemented some JavaScript logging on one of the web sites I work on, something like this: Using XMLHttpRequest to log JavaScript errors. It is working well and helping me uncover errors in my code, but many of the remaining errors make little or no sense. Here is a list of some of the oddities I have seen but have been unable to reproduce in my development environment, since they are so rare and sporadic:
1. There are times when certain JavaScript functions and variables cannot be found. Many of these are defined in external JavaScript files. In Firefox I see errors when an external file fails to load for some reason; Internet Explorer gives no such indication, but I have to assume the same thing is happening. The Firefox errors do not give any details as to why the file failed to download.
2. Sometimes the server side logging gets blank errors. Somehow my logging page gets hit with no data, which shouldn't be happening.
3. The most frequent AJAX error is an AJAX call where the data returned is incomplete. The readyState property is set to 4 and the status is 200, but comparing the actual length of the data (from the Content-Length header) to the length of the data in responseText, some data is missing. Sometimes it is almost the right size, but many times it is only a fraction of the expected size, even after taking into account that the data is UTF-8 encoded. The data can be anywhere from 20 to 30 KB, so I have wondered if the amount of data may be a contributing factor.
4. The other AJAX error is non-standard status codes. In Firefox I see responses of 0, and in Internet Explorer I see the infamous 12000-series error codes like 12019, 12029, 12030, and others.
Here is where I stand on addressing each of these:
1. I haven't found anything definitive to help when files fail to load, but I am going to enable gzip compression for JavaScript files on IIS 7 to see if the failures are due to slow connections timing out. Hopefully the smaller file size will help these requests succeed more often, but this is not a complete solution; I expect to continue to see this problem.
2. This one has me stumped. The JavaScript logging code should always be passing an error message, even if the JavaScript error handler has no content. I have no idea why these come back blank. Maybe the request is timing out, or my server side logging code isn't waiting until all of the data is ready (see the handler sketch after this list).
3. I haven't had much luck with this one either. So far I have added some retry code: if a request fails, I just try again. This appears to work about 75% of the time, but I currently limit it to 1 retry, so I still see some failures. It seems like a less than ideal solution, but maybe it is the best I can do.
4. Same as number 3, I just try the request again, and it succeeds about 75% of the time. The requests are over HTTPS, and some information I've found indicates that this might be a problem with Internet Explorer trying to reuse connections and failing, but I have not tried adding the Connection: Close header yet.
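For the blank reports in number 2, the first thing I plan to rule out is the server side mishandling the request body. A minimal ASP .NET handler along these lines (the JsErrorLogHandler name, log path, and log format are all hypothetical) reads the entire body before doing anything else, and records even the empty hits so they are at least visible:

using System.IO;
using System.Web;

// Hypothetical logging endpoint, registered as something like LogError.ashx.
public class JsErrorLogHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Read the whole request body up front and log its length, so a
        // short or empty body shows up clearly in the log.
        string body;
        using (var reader = new StreamReader(context.Request.InputStream))
        {
            body = reader.ReadToEnd();
        }

        if (string.IsNullOrEmpty(body))
        {
            LogError("(blank error report) UA=" + context.Request.UserAgent);
        }
        else
        {
            LogError(body.Length + " chars: " + body);
        }

        context.Response.ContentType = "text/plain";
        context.Response.Write("ok");
    }

    public bool IsReusable { get { return true; } }

    static void LogError(string message)
    {
        // Placeholder: append to whatever log store the site already uses.
        File.AppendAllText(@"C:\logs\js-errors.log", message + "\r\n");
    }
}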
October 13, 2009
C# vs. C programming
For quite a few years now I've been working primarily with C# and Microsoft .NET, and I have to admit that even with their shortcomings I would list them as my preferred programming language and development framework. There are still cases where C and C++ are better, faster, and so on, but overall I find that when I use C# I'm more productive, my code has fewer errors, and it is easier to maintain.
At my current job I get to work in both worlds. Most of the newer software we write is in C#, but we still have a pretty expansive set of libraries and applications in C++; we even have one in managed C++ (which has its own set of problems). I always prefer working on the C# side of things, and even dread working with some of our C++ applications.
I know that for most people this is an almost religious topic, and I don't want to come across as a zealot; I just have my preferences. I've used C++ quite extensively and it is a great language, but C# builds on the long history of C and C++ and adds more than a few nice features. And since it is built on a decent framework (.NET), there is greater consistency across code. When you change jobs in a C++ environment, you probably have to learn a new set of frameworks: some companies use in-house libraries, some use Boost, and others use something else. With C# most of the basic framework pieces come built in. There will always be a need for other frameworks beyond that, but .NET comes with most of the necessities.
There are many other features and helpful things that come with C# and .NET, but my overall view is that when I use them I am more productive, and that is money in the bank to me.
October 3, 2009
Agile Software Development
There are many different software development methodologies practiced today, and one of the popular choices is Agile Software Development. When I do software development I generally use agile techniques, but I wouldn't consider myself an agile purist. Where I work we use scrum meetings, very quick development cycles, and a few other agile ideas, but we don't use every agile technique.
I assume most people and businesses do this, but I try to be familiar with as many different methodologies and practices as possible, and I try to use the ideas and techniques that best fit the situation at hand. If I'm doing a large scale project, I do more work gathering requirements up front; if the project is much smaller, I may just sit down with the project owner for a quick discussion and start designing and implementing from that.
I know some people are much more religious about this, and to them the idea of mixing and matching between different methodologies would be heresy, but it really does work. There are times when it is good to be strict and keep consistent policies and procedures, but there seem to be many more occasions where flexibility is king. Within certain constraints, and with a good understanding of software development, it can be very advantageous to be flexible. In days of tight schedules, limited resources, and never ending requirements, we must do what we can to thrive and create great software.
Labels: agile, management, software design, software development
October 2, 2009
Death to Internet Explorer 6
I personally think that Internet Explorer 6 should be outlawed. Web development is difficult enough when it comes to making things look good and work right; throwing Internet Explorer 6.0 into the mix just makes it that much harder. Even when I'm doing ASP .NET development, where you would think everything would work well with IE 6, that is not the case.
I spend most of my time using Firefox to test my sites, then do some quick checks in either IE 7 or IE 8, depending on what is installed on the computer, and in most cases things look and work pretty well; sometimes a few tweaks are necessary to get things just right. After that I have to spin up a virtual machine, or find an old computer with IE 6 on it. And that is where the fun begins.
Web page layouts never quite look right; IE 6 never really seems to do what you've told it to. It selectively ignores CSS and re-sizes things how it wants, and the Internet abounds with IE 6 CSS hacks. Functionality has just as many problems. Basic JavaScript is hit or miss: it might work just fine, or it might decide to be your worst enemy. Anything more complex, like AJAX, is almost a lost cause. You might as well develop and maintain two separate websites: one for real web browsers and another for IE 6.
Maybe I'm being a little hard on the browser, but it really is a web developer's worst nightmare. If there were only a few computers out there that still had IE 6, that would be one thing, but a large portion of computers still run IE 6 as their primary browser. I definitely favor Firefox, but I don't mind if people want to use IE 7 or IE 8, just not IE 6. We should all wish it a fond farewell and retire the old chap already.
Labels: ASP .NET, Internet Explorer, software development, Windows
October 1, 2009
.NET Code Coverage
I've been looking for a good, free code coverage tool for .NET for quite a while. I know that years ago NCover used to be pretty good, but the open source version appears to be dead, replaced by a commercial version; the old version still exists and works, but it's pretty outdated. Recently I found PartCover, but I haven't had a chance to try it out thoroughly. Beyond that, I haven't been able to find anything else that is open source or even free. Neither of these two options appears to have much active development going on, which I consider a pretty important metric when looking at adopting an open source tool or framework.
It seems surprising that there isn't more activity in the open source world in this area. There are many other active open source communities around C# and .NET, with projects like NUnit and NHibernate that are actively developed and extremely helpful. But there doesn't seem to be much open source activity around code coverage. Is this because people find that the commercial options work well at a reasonable price? Or do people just not put much importance on code coverage?
I think code coverage receives less attention than many other software development practices like unit testing, but it still seems like it should get more focus than it currently does. I hope there is a good code coverage tool out there that I just can't find, but I'm not holding my breath.
Labels: ASP .NET, code coverage, open source, software development