A place to put all of my random thoughts about software development and computer programming.
December 1, 2009
Holiday Sale Madness
It seems like almost everything is on sale this holiday season, especially in the electronics arena. I've seen laptops, desktops, HDTVs, Blu-ray players, and almost everything else you can think of on sale. We all know that the prices of most technology products go down over time, while the quality and feature set go up. But these price changes happen in fits and starts. Prices don't just decline slowly over time; they seem to jump up and down, with the overall trend being a gradual decline. Overall I think this is a good thing for the consumer, but personally it would be nice to see consistent price decreases. Sometimes it is frustrating to feel like I need to buy something now, instead of waiting a few months, just because I think I will get a better deal (but maybe that's just a personal problem). Ideally I'd like to know that when I walk into a store or buy something online, I'm getting the best possible price, independent of which day I buy it.
November 21, 2009
Handheld computers
My wife has been trying to decide on some sort of handheld computer/device. At first she wanted an iPhone, then realized the monthly costs weren't worth it. Then she wanted an iPod touch, because it can do many of the same things. Then we stopped by a store and she noticed an Archos internet media tablet. It's got a bigger screen than an iPod touch, and some versions appear to run the Android operating system. Now I have the fun task of seeing whether any of these options will actually do what she wants, and then convincing her which will be best. Isn't it fun being the local tech guru?
November 11, 2009
Business Requirements vs. Software Design
In the world of software development there seem to be two forces that are always in conflict. As a software developer I always want to do software projects "right." Most of the time this can mean significant time invested in infrastructure, design, architecture, and/or new code. As an executive I realize the value of getting things done as quickly as possible, but still with the right feature set. Both of these approaches can be valuable, and when balanced properly they can lead to extremely valuable software.
On the software development side it always seems like it would be nice to have unlimited resources and unlimited time. If that were the case, it seems like we could always come up with well-designed, modular, maintainable software that fills all of the needs of our customers. There would be time to do research, try out new technologies, and all of the other "cool" stuff that software developers like to do. Old code could be refactored to fit new requirements; new code would be well designed, fully unit tested, and flexible enough to fit future needs.
On the business side all of these things are valuable, but there are many other things that come into play, like time to market, opportunity costs, etc. Many times, the quicker you can get a product to market, the sooner you can see the response of customers and adjust accordingly. Taking another 4-6 months to get a product out may mean that a competitor is that much further ahead of you.
When these two ideas come together with the right mix, great things can happen. If developers are given enough time and resources to do a good job, then the software will have many of the same qualities as if the team had unlimited time. Of course the terms "enough time and resources" and "good job" are up for interpretation, and vary per project, but most teams should roughly know how much time they should spend for things to be good enough. Not perfect, just good enough. This means that people on the business side need to be understanding, yet apply appropriate pressure to make sure things are progressing.
When this all comes together teams can create great products. They won't be perfect, and they won't be quick, but they will fit the business need as well as possible and still not be just a jumbled mess of unmaintainable code. It can be hard to find the right balance, but it's worth striving for because it makes everyone happy: your customers, software developers and the people in charge. And you might even make a little money along the way.
November 6, 2009
How to Make your Computer Faster
Don't we all want to make our computers run faster? How can you speed up your computer? Here are a few tips for everyone from novice to expert:
- Uninstall programs that you don't need or don't use: This can be a huge, easy win, because not only do these programs take up space, much of the time they are doing things while your computer is running, slowing it down.
- Defragment your hard drive: This realigns the files on your disk so that programs can start faster and you can open your files more quickly.
- Reinstall everything on your computer: If your computer seems to be a lot slower than it used to be, then you probably have too many things on it, and it might be best to just start over. I generally try to do this on my computers every 18 months or so. This will require more than just basic computer skills to do right.
- Turn off unused services and start-up programs: This can help quite a bit if your computer seems sluggish, but this is not for the faint of heart. If you turn off the wrong programs, you can end up with major problems and break everything.
- Add more memory: This can be a cheap, easy fix that will help some computers, but not all. Not every computer has room for more memory, and most 32-bit operating systems can only use 3-4 GB of it.
- Buy a new computer: Especially if your computer is more than 2 or 3 years old, your best bet may be to just buy a new one. Chances are that a brand new $300-400 computer will be way faster than whatever else you have.
November 2, 2009
ASP .NET Model View Controller
Recently I've started looking into ASP.NET MVC. Over the last few years I've become pretty familiar with standard ASP .NET, so I've wanted to see what Model View Controller has to offer. At first glance it seems to address some of the shortcomings of standard ASP .NET, giving the developer more control over the HTML that is sent to the browser. Because of the architecture, the code can be much easier to test as well.
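For anyone who hasn't seen the pattern, here is a rough sketch of what a controller might look like; the names here are hypothetical, but the shape is what ASP.NET MVC expects:

using System.Web.Mvc;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The framework routes a URL like /Products/Details/5 to this action,
// and the view then renders exactly the HTML you wrote, nothing more.
public class ProductsController : Controller
{
    public ActionResult Details(int id)
    {
        // In a real application this would come from a data layer.
        var model = new Product { Id = id, Name = "Sample product" };
        return View(model);
    }
}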
The biggest shortcoming I've seen is that there are not many built-in components, so not only can you control the HTML that is output, it appears that you have to do it manually. One of the really nice things about ASP .NET has been the ability to drag and drop controls that, in most cases, just work out of the box. For the development that I've done this has been a tremendous time saver. Until I see something promising in this area, I probably won't look too much at using or switching to ASP .NET MVC.
October 29, 2009
Computers Everywhere
It seems like computers are becoming more and more pervasive. They are all over the place: smart phones, cars, GPS units, and many other devices. Basic computer technology is just so cheap. You can slap a processor, memory, and/or flash storage into a very small space for very little money. Some devices might need a screen or some form of input like buttons, but it just depends on what the device is intended for.
Where will the next big push go? I keep hearing about appliances like fridges with built-in computers to tell you what's in your fridge and what you need to buy at the store, but these haven't really become widely available. Maybe it will be in clothing? Your shoe will be able to track and trend how many steps you take and how far you go on a daily basis.
All I can assume is that computers will continue to be more and more available and used in our everyday lives, and not just as traditional desktops or laptops. How long until computers are embedded in us?
October 28, 2009
MySQL vs Oracle vs Microsoft SQL Server
At a previous job I worked as a software developer supporting a set of data migration tools and frameworks. Most of the databases that we worked with were either Oracle or Microsoft SQL Server. We had a few occasions to work with MySQL or PostgreSQL, and even a few times we were using "databases" like CSV files and Excel spreadsheets. I became somewhat familiar with the strengths and features of Oracle and SQL Server, especially in the areas of high-performance loading and extracting of data. Some of the starting and/or ending datasets were upwards of 200 GB, and with the tight scheduling constraints of data conversion we needed to do the conversions as quickly as possible, to minimize downtime between taking down an old system and starting up a new one.
Recently I've been working more with MySQL as a backend database for a number of different applications. Overall I really like it: it performs well, is easy to maintain, and the price is obviously right. But I have noticed that it takes a little more work to tweak query performance in MySQL than I remember in Oracle or SQL Server. If I remember correctly from my college days, relational algebra provides a pretty good framework for reworking and optimizing queries. MySQL seems to either not do this at all, or do a really bad job of it. I have found myself on multiple occasions with a SQL query that performs much more poorly than I would expect. Sometimes there are index issues that require altering the database in some way, but in most cases the real problem is that the query needs to be reworked to perform better. Switching a
SELECT * FROM tableA, tableB WHERE tableA.field = tableB.field
to a
SELECT * FROM tableA JOIN tableB ON tableA.field = tableB.field
can make a huge difference. Shouldn't the query engine be able to determine that these are equivalent and adjust accordingly? Oracle and SQL Server seemed to be able to; with them, there was very little I could do to improve query performance in most cases, because the optimizer had already done the work.
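One way to at least see what plan MySQL has chosen, for what it's worth, is to prefix a query with EXPLAIN and compare the output for the two forms above:

EXPLAIN SELECT * FROM tableA JOIN tableB ON tableA.field = tableB.field

If the two plans differ (different join order, chosen indexes, or row estimates), that confirms the engine treated the equivalent queries differently.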
There are other cases where I've had to modify a query slightly to something that is functionally equivalent, but dramatically faster. I realize that there are many cases where a query engine/optimizer would not be able to easily find the best or even a better way to execute a query, but MySQL doesn't even seem to try. Does MySQL even have the notion of a query optimizer? Hopefully this is something on the radar for MySQL, because it seems like it wouldn't take too much work to get some pretty substantial wins for performance.
October 27, 2009
WordPerfect vs. Microsoft Word
I know that this battle is long over, but I was trying to do some fairly complex layouts with Microsoft Word a few days ago, and I remembered how much I loved WordPerfect. For good or bad, Word has become ubiquitous. Fortunately almost all of the documents I create are simple paragraphs with maybe a few headers and some lists; anything more complex than that and Word seems to fall apart. The last big, important document I worked on was a few years ago: my master's thesis. Rather than try LaTeX, I decided to stick with tried and true WordPerfect. I had heard too many horror stories of people trying to use Word and getting something just a little messed up, with no way to correct it. They would have to take their thesis content and recreate it in a new Word document.
Obviously this wasn't something I wanted to do. So I brought out my trusty copy of WordPerfect and went to work. I will admit that there were a few struggles to get the layout just right, but nothing that required huge amounts of time, or starting over. Beyond the fact that WordPerfect just seems more intuitive for doing tables, diagrams, and images, it has reveal codes. For anyone not familiar with WordPerfect, reveal codes is kind of like looking at HTML. I know Word has something that lets you look at style information, but it doesn't give you nearly the same information or control as reveal codes.
I also realize that something like reveal codes is not for everyone. I am a software developer, so raw document codes like reveal codes, HTML, or C# don't scare me. In WordPerfect I have the best of both worlds: a really good WYSIWYG editor where I can do most of my work, and an equally good view of the underlying document codes that lets me fine-tune, fix, and tweak my documents. If I get a stubborn image that won't stay where I want it, I can look at the detailed properties in the reveal codes to see why it isn't behaving like the rest of my images. In Word my best bet would be to remove the image, add it to the document again, and cross my fingers.
October 26, 2009
Computer prices vs. Computer power
I just finished buying components for a new PC for my wife, and I surprise myself every time with how cheap computers are. For only about $500 I was able to build a quad-core computer with 8 GB of memory, a GeForce 9500 with 512 MB of memory, and a 500 GB hard drive. I think the first computer I bought was a 486 DX2 with a 66 MHz processor, and I don't even remember how much memory or disk space it had.
It seems like most people don't even have the need for expensive computers anymore. Most mainstream computers can handle almost anything you can throw at them. There will always be specific tasks that can take advantage of high end computers and workstations, but most people don't need these. From basic tasks like word processing and browsing the Internet, to high end computer games: most computers will handle the job just fine.
When I was younger I always used to dream about high-end, super powerful computers. I'd go to a computer website like Dell and spec out the coolest, most expensive computer I could. Most of those computers I dreamed about are less powerful than the computer I just bought. I almost think that computers are plateauing: I don't see the same rush to make a faster processor or build a bigger hard drive. Most of the basic computer technology is more than sufficient for the next few years at least. It just doesn't seem like we need processors faster than 3 GHz, or disk drives larger than 1 TB, for home desktop computers. What we need now is applications and operating systems that can take advantage of the power they have available to make our lives and jobs easier. Hopefully someone will take on that job.
October 24, 2009
C# 4.0 and Visual Studio 2010
I've started looking into C# 4.0, the Microsoft .NET Framework 4.0, and Visual Studio 2010, hoping there will be some cool new features that I will like. I've done very little with some of the new features of .NET 3.0 and 3.5, and didn't notice much new with Visual Studio 2008. Here are some of the things I've seen that look interesting:
- Parallel Extensions for the .NET Framework: Now that most computers have 2-4 processors it has become more apparent that parallel programming is going mainstream. Instead of being relegated to high-end scientific and business applications on large supercomputers and distributed systems, everyday programmers are going to need to know and use techniques for parallel programming. These extensions look like a step in the right direction. When combined with solid software development practices, they can get a developer headed in the right direction to easily take advantage of the multiple cores available in computers today (see the first sketch after this list).
- C# optional parameters: One thing that I do miss from C++ is finally making its way to C#. I can't count the number of times that I have had to create multiple variations of a function just to mimic the capabilities of optional parameters. What could be 3 or 4 functions with slight variations in parameters can now become a single method definition. Easier to maintain, easier to use, and much more convenient. Plus, by allowing named arguments, you don't even need to specify all parameters from left to right; you can pick and choose which parameters you want to set when you call the function (see the second sketch after this list).
- Static IDs for ASP .NET controls: I've always wondered why Microsoft decided to enforce their control naming on all ASP .NET developers. I could understand if the naming standard were just the default, because it does ensure that all of the names are unique, but we are finally getting a way to specify the name we want. This will make my life so much easier, especially for JavaScript code and form post-backs. With multiple nested master pages and containers, the length and complexity of control names is ridiculous. I've actually had a few cases in JavaScript where I've had to create a lookup variable to map my usable names to the actual control names.
- Dynamic Programming and Dynamic Variables: I have to admit that I haven't done anything with the dynamic languages and features that already exist in .NET, and I don't intend to start now. I tend to prefer the enforced structure and design of normal development, but it is nice to know it is available if I want to give it a try.
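To illustrate the parallel extensions, here is a minimal sketch of the kind of loop-level parallelism they enable, assuming the Parallel.For API that ships with .NET 4.0; the work inside the loop is just a placeholder:

using System;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        double[] results = new double[1000];

        // Parallel.For partitions the iterations across the available
        // cores instead of running them one after another.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i) * Math.Sin(i);
        });

        Console.WriteLine("First result: " + results[1]);
    }
}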
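And here is a minimal sketch of optional and named parameters as I understand them from the C# 4.0 previews; the Logger class is made up for illustration:

using System;

class Logger
{
    // One definition replaces several overloads: callers can omit
    // any parameter that has a default value.
    public void Log(string message, string category = "General",
                    bool includeTimestamp = true)
    {
        string prefix = includeTimestamp ? DateTime.Now + " " : "";
        Console.WriteLine(prefix + "[" + category + "] " + message);
    }
}

class LoggerDemo
{
    static void Main()
    {
        var logger = new Logger();
        logger.Log("started");                            // both defaults apply
        logger.Log("disk full", includeTimestamp: false); // named argument skips the middle parameter
    }
}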
October 23, 2009
Is Linux the operating system of the future?
I've been using different variations of Linux for 10-15 years, and there have been some pretty dramatic improvements over that time. As a server operating system Linux is great; it has many advantages over Windows, but Windows has many advantages over Linux. Both operating systems can be good choices, depending on your needs and requirements.
I'm more interested in the desktop side of things. Linux still hasn't made many inroads on desktops. Many techies have dual-boot installations with both Windows and Linux, and some ultra-cheap computers come with Linux. Some companies and government institutions have converted wholesale to Linux. But most of these cases are the exceptions to the rule. Windows is by far the most popular desktop operating system.
I think that the biggest reason for this is compatibility: all of the applications that people use work on Windows, everyone else uses Windows, everyone knows Windows. Linux has a huge hurdle to overcome to be able to compete. Even if Linux is a superior technology and runs more efficiently, and even if its GUI is comparable to the UI on Windows, it doesn't have the same value as Windows.
When you select Windows as your operating system you don't have to worry about training, you don't have to worry as much about application incompatibilities, and you don't have to worry about whether your employees will be able to interact with external parties. With Linux, many or all of these can be potential issues that will require time and money to overcome. I don't think that there are any significant flaws in Linux. It isn't perfect, but Windows isn't either. Even with all of the things that Linux has going for it, it doesn't have the momentum to really beat Windows. At least not yet...
October 22, 2009
Why are netbooks so popular?
I've been wondering lately why netbooks are becoming so popular. I've always seen these devices as a small niche market between smart phones and real laptops. They seem to be smack dab in the middle of the two worlds, with few of the advantages of either. Smart phones are ultra portable; they fit in your pocket. They may be lacking in processing power and screen real estate, but you can take them everywhere. Laptops are not as portable, but you can still take them with you on the go. They can have considerable processing power and a good screen size, so they are still very useful computing platforms.
Netbooks seem to fall in the middle. They are small, but not small enough to carry in your pocket; it seems like you would still need a briefcase or bag to carry them around. They don't seem much more powerful than a smartphone, and they have considerably less screen real estate than a laptop.
Maybe I'm just not the target audience. I either want ultimate portability, where I'm willing to sacrifice performance; or I want ultimate power so I can get real work done. But I'm a software developer and a techie, so I probably have different needs than the average Joe.
October 17, 2009
Code Profiler for .NET
I've always been interested in profiling my C# code. Years ago with .NET 1.1 I used the DevPartner Profiler Community Edition, which is no longer available. Ever since then I have been unable to find a good free or open source solution for profiling .NET code. I know that there are some decent commercial products out there, but I'm cheap and I don't use the tools often enough to merit purchasing them.
Is code profiling just not in demand? It seems like if enough people were interested in the value of profiling, then there would be at least one decent open source solution. For me it has been fun on occasion to really dig deep into an algorithm that I'm working on, trying to eke just a little more performance out of it. I've found that disassembling the code also helps to see what is actually happening behind the scenes. One of the nice things about profiling .NET code is that you don't need to instrument the code manually. When I used the DevPartner Profiler I just picked the options I wanted and clicked go. After running the application I could delve into the details of which functions were being hit the most, and even which lines of code were consuming the most time. It can be a challenge to tune the performance of an algorithm or an application, but it can be rewarding to see the code double in speed, or even more.
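In the meantime, a crude stand-in for a real profiler is manual timing with the Stopwatch class; here is a minimal sketch, where the loop is just a placeholder for whatever algorithm is under test:

using System;
using System.Diagnostics;

class TimingDemo
{
    static void Main()
    {
        var stopwatch = Stopwatch.StartNew();

        // The code being measured goes here; this loop is a placeholder.
        long sum = 0;
        for (int i = 0; i < 10000000; i++)
        {
            sum += i;
        }

        stopwatch.Stop();
        Console.WriteLine("Elapsed: " + stopwatch.ElapsedMilliseconds + " ms (sum=" + sum + ")");
    }
}

Of course this only gives coarse, whole-block numbers rather than per-function or per-line detail, which is exactly why a real profiler is still worth having.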
Hopefully I'll be able to find a good open source code profiler to "get my fix" on performance tuning my code.
October 16, 2009
Weird JavaScript and AJAX errors
I recently implemented some JavaScript logging on one of the web sites that I work on, something like this: Using XMLHttpRequest to log JavaScript errors. It is working well and helping me uncover errors in my code, but many of the remaining errors that I see make little or no sense. Here is a list of some of the oddities that I have seen, but have been unable to reproduce in my development environment since they are so rare and sporadic:
- There are times that certain JavaScript functions and variables cannot be found. Many of these are defined in external JavaScript files. In the case of Firefox I see errors when an external file fails to load for some reason. Internet Explorer gives no such indication, but I have to assume that the same thing is happening. The Firefox errors do not give any details as to why the file failed to download.
- Sometimes the server-side logging gets blank errors. So somehow my logging page gets hit with no data; this shouldn't be happening.
- The most frequent AJAX error is that when making an AJAX call, the data returned is incomplete. I know that the readyState property is set to 4 and the status is 200, but looking at the actual length of the data (from the Content-Length header) and comparing it to the length of the data in responseText, some data is missing. Sometimes it is almost the right size, but many times it is only a fraction of the expected size. This is even after taking into account the fact that the data is UTF-8 encoded. The data size can be anywhere from 20 to 30 KB, so I have wondered if the amount of data may be a contributing factor.
- The other AJAX error is non-standard status codes. With Firefox I see responses of 0, and with Internet Explorer I see the infamous 12000-range error codes like 12019, 12029, 12030, and others.
- I haven't found anything definitive to help when files fail to load, but I am going to enable gzip compression for JavaScript files on IIS 7 to see if the failures might be due to slow connections timing out. Hopefully the smaller file size will help these requests succeed more often, but this is not a complete solution. I expect to continue to see this problem.
- This one has me stumped. The JavaScript logging code should always be passing an error message along with the error data, even if the JavaScript error itself has no content. I have no idea why these would come back blank. Maybe the request is timing out, or I have an issue with the server-side logging code not waiting until all of the data is ready.
- I haven't had much luck with this one either. So far I have added some retry code, so if I get a failure, I just try again. This appears to work about 75% of the time, but I currently limit it to 1 retry, so I still see some failures. This also seems like a less than ideal solution, but maybe it is the best I can do.
- Same as number 3: I just try the request again, and it succeeds about 75% of the time. The requests are over HTTPS, and some information I've found indicates that this might be a problem with Internet Explorer trying to reuse connections and failing, but I have not tried adding the Connection: Close header yet.
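For context, the server side of a logging setup like this is conceptually just an endpoint that reads whatever the JavaScript posts and appends it to a log. Here is a minimal, hypothetical ASP.NET sketch (the class name and log path are made up), which is also where the blank errors from item 2 show up:

using System.IO;
using System.Web;

// A bare-bones logging endpoint, e.g. mapped to something like /LogError.ashx.
public class LogErrorHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Read the full request body that the JavaScript logger posted.
        string body;
        using (var reader = new StreamReader(context.Request.InputStream))
        {
            body = reader.ReadToEnd();
        }

        if (string.IsNullOrEmpty(body))
        {
            body = "(empty error report)"; // the mysterious blank errors land here
        }

        File.AppendAllText(context.Server.MapPath("~/App_Data/jserrors.log"),
                           body + "\r\n");
    }

    public bool IsReusable
    {
        get { return true; }
    }
}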
October 13, 2009
C# vs. C programming
For quite a few years now I've been working primarily with C# and Microsoft .NET, and I have to admit that even with their shortcomings I would list them as my preferred programming language and development framework. I will admit that there are still cases where C and C++ are better/faster/etc., but overall I find that when I use C# I'm more productive, my code has fewer errors, and it is easier to maintain.
At my current job I get to work in both worlds. Most of the newer software we write is in C#, but we still have a pretty expansive set of libraries and applications that are in C++; we even have one that is in managed C++ (which has its own set of problems). I always prefer working on the C# side of things, and even dread working with some of our C++ applications.
I know that for most people this is an almost religious topic, and I don't want to come across as a zealot; I just have my preferences. I've used C++ quite extensively and it is a great language, but C# builds on the long history of C and C++ and adds more than a few nice features. And since it is built on a decent framework (.NET), there is a greater consistency to code. When you change jobs in a C++ environment, you probably have to learn a new set of frameworks. Some companies use in-house libraries, some use Boost, and others use something else. With C# most of the basic framework pieces come built in. There will always be a need for other frameworks beyond that, but .NET comes with most of the necessities.
There are many other features and helpful things that come with C# and .NET, but my overall view is that when I use them, I am more productive overall, and that is money in the bank to me.
October 3, 2009
Agile Software Development
There are many different software development methodologies practiced today, and one of the popular choices is Agile Software Development. When I do software development I generally use agile techniques, but I wouldn't consider myself an agile purist. Where I work, we use scrum meetings, very quick development cycles, and a few other agile ideas. But we don't use every agile technique.
I assume most people and businesses do this, but I try to be familiar with as many different methodologies and practices as possible. I try to use the ideas and techniques that best fit the situation at hand. If I'm doing a large-scale project, I try to do more work gathering requirements upfront, but if the project is much smaller I may just sit down with the project owner for a quick discussion and start designing and implementing from that.
I know some people are much more religious about this, and the idea of mixing and matching between different methodologies would be heresy, but it really does work. There are times that it is good to be strict and keep with consistent policies and procedures, but there seem to be many more occasions where flexibility is king. Within certain constraints and with a good understanding of software development, it can be very advantageous to be flexible. In days with tight schedules, limited resources, and never ending requirements we must do what we can to thrive and create great software.
Labels:
agile,
management,
software design,
software development
October 2, 2009
Death to Internet Explorer 6
I personally think that Internet Explorer 6 should be outlawed. Web development is difficult enough when just trying to make things look good and work right; throwing Internet Explorer 6.0 into the mix makes things that much harder. Even when I'm doing ASP .NET development, you would think that everything would work well with IE 6, but that is not the case.
I spend most of my time using Firefox to test my sites, then do some quick checks in either IE 7 or IE 8, depending on what is installed on the computer, and in most cases things look and work pretty well. Sometimes there may be a few tweaks necessary to get things just right. After that I have to spin up a virtual machine, or find an old computer with IE 6 on it. And that is where the fun begins.
Web page layouts never quite look right; IE 6 never really seems to do what you've told it to. It selectively ignores CSS and re-sizes things how it wants. The internet abounds with IE 6 CSS hacks. Functionality seems to have just as many problems. Basic JavaScript is hit or miss: it might work just fine, or it might decide to be your worst enemy. Anything more complex, like AJAX, is almost a lost cause. You might as well develop and maintain two separate websites: one for real web browsers and another for IE 6.
Maybe I'm being a little hard on the browser, but it really is a web developer's worst nightmare. If there were only a few computers out there that still had IE 6, that would be one thing. But there is still a large portion of computers that run IE 6 as their primary browser. I definitely favor Firefox, but I don't mind if people want to use IE 7 or IE 8, just not IE 6. We should all wish it a fond farewell, and retire the old chap already.
Labels:
ASP .NET,
Internet Explorer,
software development,
Windows
October 1, 2009
.NET Code Coverage
I've been looking for a good, free code coverage tool for .NET for quite a while. I know that years ago NCover used to be pretty good, but the open source version appears to be dead, replaced by a commercial version. The old version still exists and works, but it's pretty outdated. Recently I've found PartCover, but I haven't had a chance to thoroughly try it out. Beyond that, I haven't been able to find anything else that is open source or even free. Neither of these two solutions appears to have much active development going on, which I consider a pretty important metric when looking at adopting an open source tool or framework.
It seems surprising that there isn't more activity in the open source world in this area. There are many other active open source communities around C# and .NET. Projects like NUnit and NHibernate are actively developed and extremely helpful. But there doesn't seem to be much open source activity around code coverage. Is this because people find that the commercial options work well at a reasonable price? Or do people just not put much importance on code coverage?
I think that code coverage receives less attention than many other software development practices like unit testing, but it still seems like it should get more focus than it currently does. I hope that there is a code coverage tool out there that I just can't find, but I'm not holding my breath.
Labels:
ASP .NET,
code coverage,
open source,
software development
September 30, 2009
The Next Big Thing
I have to admit that I am constantly trying to think of The Next Big Thing: the next web fad, the next big technology, etc. I'd love to come up with the next Facebook or Google. Software is my passion, and I'm always trying to think of new ways to use software to help people and/or make money. Most of my ideas never pan out, but that doesn't stop me from thinking of new ones.
I'll admit that it is a little optimistic to think that I'll come up with the next Facebook or Google, but it would be so cool if I did. All I need to do is think of/invent/predict the next big thing on the web. I find that it is easy to come up with ideas, but most of them are slight twists on things that already exist. Trying to come up with something truly unique that people actually want is not so easy.
Of course, part of what drives me is the thought of a potential revenue stream; it would be great to come up with something that generates income. But that is not my only draw. I just enjoy writing software, and if I can make a little extra money doing it, then even better. The last aspect is nerd cred. How cool would it be to be the next Mark Zuckerberg or Larry Page?
Hopefully I'll come up with The Next Big Thing soon, but it's taking a lot of work.
September 29, 2009
Software Developer Job Interviews
At my job I have the interesting task of interviewing potential software developers. First of all, it's much better to be on the interviewer side of the equation, but even then I see some pretty interesting things.
Some candidates that I've seen just seem so far off the mark that it isn't even funny. I know that software developers have some very strong (and generally accurate) stereotypes about them, but sometimes I just have to laugh. I assume that when people are interviewing, they put their best face forward. So when I see someone who is severely lacking in communication skills, or who can't seem to think through a basic story problem, I get concerned.
For hiring software developers, I generally look for two main things. First is technical experience and knowledge. The candidates I hire need to have a respectable four-year degree in Computer Science and/or years of on-the-job experience. Of course a lot of this depends on the level of candidate I'm looking for, but there has to be something. Much of this technical background can be gleaned from a resume, unlike the second thing I look for: communication skills.
Like I said before, I know that there are many well-deserved stereotypes about software developers, and one of them is a lack of communication skills. But even between developers there can be a huge difference in this area. Some people that I have interviewed (and sometimes worked with) can't even communicate well with other techies. They speak in a language all their own that others can barely understand.
If someone passes the smoke test for both of these items after a brief phone interview, then I will generally consider a face-to-face interview. One thing that I like to use during an interview is word problems or puzzles. These help me get some insight into how people think, communicate, and even respond to pressure. And it's not all about getting the correct answer, although that helps.
One other resource that I have found to be useful, both as an interviewer and as an interviewee, is Programming Interviews Exposed. It isn't a magic bullet, but it has helped me as an interviewer to get a different view on things. Some of the ideas it presents are a little extreme, but they still provide some insight.
September 28, 2009
The Future of Computer Software Development
What does the future of software development hold? Even over the last few years I can think back to how much more difficult the day-to-day tasks of software development were. Writing a Java GUI application was painful, using source control was difficult, and testing code was error prone. All of these things and many more seemed to take up a lot of my time.
To be honest not all of these things have gotten easier. But many great tools and methodologies have come along that make many of the tasks we perform each day much easier. They allow me to spend less time on the mundane tasks so that I can focus on what I enjoy: developing software. I find that more and more of my day can be spent on system architecture, software design, and actual coding.
Two specific tools that come to mind are Visual Studio and Subversion. Visual Studio can be a pain to learn to use well. There is a steep learning curve, but I find myself somewhat dependent on the tools and help that it gives me. I know some people will think that this is a bad thing, but overall I find that it makes me more productive. Subversion makes my day so much easier. Most of the other source control systems I have used are not very smart, and not very easy to use.
At the heart of things I still know that software development is not easy. Regardless of the tools at hand, more powerful languages, and anything else that has come along, it still takes skilled engineers with solid development practices and procedures to write good software. But will this change in the future? Will some new tool or language come along that is so revolutionary that software development will be easy? I don't believe so.
Some tasks might continue to get easier, but in the end you will always need experienced and educated software developers to do the work.
September 24, 2009
Sporadic ASP .NET AJAX errors
I've got a website that is having very sporadic ASP.NET AJAX errors. I have a JavaScript logging solution that reports back to me when any JavaScript errors occur on my web pages. The two errors that occur most frequently are 'Type' is not defined and 'Sys' is not defined. Both of these objects are defined by the ASP .NET AJAX extensions. It appears that in some cases (probably 1% or less) these types fail to initialize. The issue occurs on Internet Explorer 6, 7, and 8. The only way I have been able to reproduce this error is by forcing the first ASP .NET JavaScript file to not load. Should I just chalk this up to sporadic internet hiccups? Most of the related issues that I've seen in my internet searches are errors that occur constantly because of misconfiguration, but my site works almost all of the time and only has issues occasionally.
The other odd error I have is AJAX calls back to our server that fail to return all of the data. The HTTP header indicates one size for the data, but the actual text returned is considerably smaller, anywhere from 10% to half, with even a few responses of 0 bytes, and that is after accounting for the fact that the data is UTF-8 encoded. I had always assumed that if the AJAX call returns with readyState 4 and status 200, then the data being returned should be correct, accurate, and complete. This error occurs across all Internet Explorer versions, and I've even seen it with Firefox. Has anyone seen issues similar to either of these, or have any suggestions on how to debug them?
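One way I've been thinking about debugging the truncated responses is to hammer the endpoint from a standalone client and compare the Content-Length header against the bytes actually received. A minimal C# sketch of that idea (the URL is a placeholder, and this assumes the endpoint returns plain HTTP with a Content-Length header):

```csharp
using System;
using System.IO;
using System.Net;

class TruncationCheck
{
    static void Main()
    {
        // Hypothetical endpoint; substitute the real AJAX URL.
        const string url = "http://example.com/service/data";

        for (int i = 0; i < 100; i++)
        {
            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            {
                Stream stream = response.GetResponseStream();
                byte[] chunk = new byte[8192];
                int read, total = 0;
                while ((read = stream.Read(chunk, 0, chunk.Length)) > 0)
                    total += read;

                // Content-Length is in bytes, so compare byte counts,
                // not character counts (the payload is UTF-8).
                if (response.ContentLength >= 0 && total != response.ContentLength)
                    Console.WriteLine("Mismatch: header={0} actual={1}",
                        response.ContentLength, total);
            }
        }
    }
}
```

If the mismatch never reproduces outside the browser, that would at least point the finger at the client side rather than the server or the wire.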
September 3, 2009
Service Oriented Architecture - SOA
One of the great buzzwords we all hear is SOA. Every person I talk to describes it differently, and everyone weighs different pros and cons whenever they extol its virtues. Some people always talk in terms of Web Services and SOAP; others always refer to service buses and queues. How can we concisely answer "What is SOA?" Here's my attempt:
- The design and architecture of the system must include services. It doesn't matter whether these are web services, services subscribed to a message bus, or something else; they just have to be services.
- The services should push your design to be more modular, more flexible, and less tightly coupled. Otherwise you might just be missing the point.
- Where possible, services should be generic so they can be reused widely. Specialized services have their place, but they should only be built when necessary (see the sketch after this list).
- Not everything should be a service. If you find yourself trying to force something into a service, you're probably trying too hard. If it doesn't naturally fit as a service, don't push it.
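To make the generic-versus-specialized distinction concrete, here is a minimal C# sketch (both interfaces and all of their members are hypothetical, purely for illustration): a document-storage contract could be reused by many applications, while an invoice-approval contract is justified only because that workflow is genuinely domain-specific.

```csharp
using System;
using System.Collections.Generic;

// A generic service contract: any application that needs to store
// and retrieve documents can reuse it as-is.
public interface IDocumentStorageService
{
    Guid Store(string name, byte[] content);
    byte[] Retrieve(Guid documentId);
    IList<Guid> FindByName(string name);
}

// A specialized service contract: only justified because invoice
// approval is a genuinely domain-specific workflow.
public interface IInvoiceApprovalService
{
    void SubmitForApproval(Guid invoiceId);
    bool IsApproved(Guid invoiceId);
}
```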
June 2, 2009
Windows 7 - free upgrade to Vista
About 6 months ago I built myself a computer and decided to install Vista on it. The primary reason for this choice (and not Windows XP) was that the computer had 8GB of memory, so a 64-bit OS was necessary, and in my mind 64-bit XP is not a viable option. So I've been using Vista for a while (I went with Ultimate so I could get Remote Desktop, etc.) and it seems to be OK. Nothing new and exciting about it, but at least it works (with a few little tweaks).
But now that Windows 7 is on the horizon, it seems to me that it should be a free upgrade (like a service pack) to Vista, not a brand new operating system. Why should I have to pay for the "fixed" version of the operating system I already own? It doesn't make much sense to me, but I guess Microsoft has to make its money somewhere.
May 27, 2009
Crowd Sourcing
I've been thinking about crowdsourcing lately (like Amazon's Mechanical Turk). It seems like an interesting way to deal with tasks that are not easy for computers to solve. I know there are other sites out there with different models, but the basic idea is the same.
From a worker perspective it seems like many of the tasks pay VERY little, which leads to people quickly losing interest, and therefore not a very reliable base of workers. So it seems to me that most of the current models will fail in the long run. I've been wondering what it would take to build a thriving community of both workers and requestors that is fair and equitable for both. The key seems to be pricing: because prices are so low, only the lowest common denominator of people do the work. With a better way to determine costs and payments, you could create a much more equitable system that benefits everyone. Workers get more work that pays more, and requestors get better-quality workers to complete their tasks. The question is, how does this happen?
It seems that instead of requestors picking the price, there needs to be some way of determining cost based on time spent, the skill set needed, and/or user feedback. A voting or bidding system might also fulfill some of the same purposes. I'm still trying to come up with a concrete idea . . . we'll see what happens.
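Just to make the idea a little more concrete, here is a rough C# sketch of the kind of pricing heuristic I have in mind (the factors, weights, and base rate are all hypothetical, not a worked-out model):

```csharp
using System;

// A hypothetical pricing heuristic: not a real marketplace model,
// just an illustration of pricing driven by time, skill, and feedback
// rather than being picked by the requestor alone.
static class TaskPricing
{
    public static decimal SuggestedPayment(
        double estimatedMinutes,  // median time workers actually spend
        double skillMultiplier,   // 1.0 = unskilled, 2.0+ = specialist
        double feedbackScore)     // 0.0 - 1.0, from worker ratings of the task
    {
        const decimal baseHourlyRate = 8.00m; // assumed floor, not a quoted figure

        decimal timeCost = baseHourlyRate * (decimal)(estimatedMinutes / 60.0);
        decimal adjusted = timeCost * (decimal)skillMultiplier;

        // Tasks workers rate poorly (unclear, tedious) cost more to post.
        decimal feedbackPenalty = (decimal)(1.5 - 0.5 * feedbackScore);
        return Math.Round(adjusted * feedbackPenalty, 2);
    }
}
```

For example, a 3-minute unskilled task with a 0.9 feedback score would come out to about $0.42, rather than whatever rock-bottom figure a requestor might pick on their own.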
April 23, 2009
Graph Visualization/Interaction
I've been trying to find a good open source C#/.NET library for graph visualization and manipulation (not charts/graphs, but graphs as in computer science graph theory: nodes and edges). I'm trying to find a way to visualize and interact with a state machine library that I have, so I just need basic nodes and edges, both with labels and color, plus automatic layout algorithms. I've looked and looked and come up pretty empty handed; the best I could find were:
- QuickGraph, which didn't seem to have any useful interactive capabilities and depends on other libraries/code for displaying graphs.
- NodeXL, which seems to be primarily Excel-based (who does that?); it does have a library component, but it didn't seem able to even label the edges of graphs.
- Netron, which seems to have existed once as open source but is now something else (the website is very unclear about what, if anything, they are selling); there is an older version of the open source code with lots of components, but it doesn't appear to be geared toward library use.
- Piccolo2D, which seemed promising, but it also didn't appear to support labeling edges, and it seemed more oriented toward alternate UI design than simple graph display and manipulation.
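For reference, the model I need is tiny. Here is a rough C# sketch of it (the types are my own, not from any of the libraries above); any library exposing roughly this shape, plus an automatic layout pass to fill in the positions, would do:

```csharp
using System.Collections.Generic;
using System.Drawing;

// The minimal graph model I want a visualization library to render:
// labeled, colored nodes and edges for a state machine.
public class StateNode
{
    public string Label;      // state name
    public Color Fill;        // e.g. highlight the current state
    public PointF Position;   // filled in by an automatic layout algorithm
}

public class TransitionEdge
{
    public StateNode From;
    public StateNode To;
    public string Label;      // the event that triggers the transition
    public Color Stroke;
}

public class StateGraph
{
    public List<StateNode> Nodes = new List<StateNode>();
    public List<TransitionEdge> Edges = new List<TransitionEdge>();
}
```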
P.S. Just in case anyone is wondering, I did consider using Windows Workflow Foundation for our state machines, but it seemed way too heavyweight, and since it sounds like they are rearchitecting the system for version 4.0 (to finally get it right, hopefully?), I decided to steer clear.
April 14, 2009
Distributed Computing
For years now I have been running Prime95 on some of my computers (I'm 256th on the top producers list right now), but I've decided to try branching out. I know SETI@home is one of the older projects, so I decided to give it a try. I'm only a few days in and I only have about 7,000 credits on my account. It is cool that SETI@home has a client application that can use the GPU, so on my quad core system I can have 5 concurrent tasks (1 GPU and 4 CPU).
Distributed computing has always seemed very interesting to me as a software developer. Unfortunately only certain projects and applications lend themselves to this kind of architecture. Most applications are tightly coupled, and even if they can be "distributed", the best you'll likely get is multiple CPUs on the same host computer. Some tasks can be distributed across multiple computers that are co-located and/or have high speed interconnects, like most supercomputers these days. But even many of those tasks require that the systems have the same hardware, OS, etc., and thinking back to my college class on parallel processing, they can be hard to write well (so they run efficiently and correctly). Relatively few projects fit the mold of being widely distributable across different machines that communicate only infrequently. But it is a really cool concept, and it seems to be gaining traction in the commercial world as well as in these kinds of collaborative volunteer projects.
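The projects that do fit are the "embarrassingly parallel" ones, where the work divides into completely independent units. A toy C# sketch of that shape (the work function is made up; real projects like SETI@home distribute units across machines rather than threads, but the independence property is the same):

```csharp
using System;
using System.Threading;

class WorkUnits
{
    // A stand-in for one independent work unit (say, analyzing one
    // chunk of radio telescope data). No unit depends on another.
    static double Crunch(int seed)
    {
        double sum = 0;
        for (int i = 1; i <= 1000000; i++)
            sum += Math.Sin(seed * i);
        return sum;
    }

    static void Main()
    {
        const int unitCount = 8;
        double[] results = new double[unitCount];
        Thread[] workers = new Thread[unitCount];

        for (int u = 0; u < unitCount; u++)
        {
            int unit = u; // capture the loop variable for the closure
            workers[u] = new Thread(() => results[unit] = Crunch(unit));
            workers[u].Start();
        }

        foreach (Thread t in workers)
            t.Join();

        Console.WriteLine("All {0} units finished independently.", unitCount);
    }
}
```

Because no unit ever talks to another, the same shape scales from threads on one box to thousands of volunteer machines that only check in to fetch work and report results.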
By the way, the best sites I've seen so far for monitoring stats are BOINCstats and Free-DC. They both have a decent set of graphs and data for analyzing my progress, but the other stats sites have some cool stuff as well (Stats 'N Stones, Synergy, The Knights, All Project Stats, Combined Statistics). If anyone else has any favorite projects let me know; I'll probably try branching out in the next few weeks.
UPDATE: I decided to join Einstein@Home as well, but I'm still open to other project suggestions.
February 26, 2009
UML and struggles with Visio
I've started working on a new project, and we're trying to design everything properly from the ground up. I've got a rough high level system architecture in place, and now I'm doing more detailed design of the individual components. I haven't used formal UML much in the past, so I'm forcing myself to learn it and use it for my class documentation. After some rough sketches on paper and whiteboards, I tried using Visio to create the diagrams. This has not gone well. Visio seems to enforce a very strict form of UML; if there are informal notes or attributes I want to add, it is almost impossible.
Also, it seems that everything you add to the diagrams has to be done through the properties dialog boxes. Why can't I just get a box with three sections and type what I want? That would be so much easier. Maybe the dialog-driven method formalizes the UML (and maybe it could be used for generating classes?), but it seems like way too much overhead.
I guess I need to look into other ways to do this with Visio, or even look at some other tools for UML. Maybe a good open source tool?
February 6, 2009
Enterprise level Report designer and generator
I'm looking for a reliable and full-featured report designer and generator (like Crystal Reports). Open source or free would be nice, but I'm open to commercial applications as well. My short list of requirements:
- Works with MySQL
- Charting/graphing (hopefully with something like Excel's PivotTable feature)
- User authentication and authorization that ideally integrates with Active Directory
February 4, 2009
Silverlight with ActiveMQ and NHibernate
I'm getting ready to start on a new application that I want to write using Silverlight. Currently I use ActiveMQ as a messaging bus between applications and services. I would like to do the same for Silverlight and avoid WCF, so that I only have one service architecture to maintain. I have not seen much on using these two technologies together. The best I have found is a few messages about compiling NMS against a beta version of the Silverlight runtime; it appears that things almost work, and may work now. Has anyone seen anything newer, or something actually working?
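For context, this is roughly the full .NET NMS usage I'd like to mirror from Silverlight; a minimal sketch, assuming a broker on localhost at the default port (the queue name is just an example):

```csharp
using Apache.NMS;
using Apache.NMS.ActiveMQ;

class NmsSendExample
{
    static void Main()
    {
        // Assumes an ActiveMQ broker running on localhost at the default port.
        IConnectionFactory factory =
            new ConnectionFactory("tcp://localhost:61616");

        using (IConnection connection = factory.CreateConnection())
        using (ISession session = connection.CreateSession())
        {
            connection.Start();

            // "app.events" is a placeholder queue name.
            IDestination queue = session.GetQueue("app.events");
            using (IMessageProducer producer = session.CreateProducer(queue))
            {
                ITextMessage message =
                    session.CreateTextMessage("hello from NMS");
                producer.Send(message);
            }
        }
    }
}
```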
I also use NHibernate as an ORM solution. Same issue: I'm trying to find people who have successfully used these two technologies together. Once again, not a whole lot of reliable-looking information. Some rumblings about LINQ seem to indicate that it would work, but nothing obvious.
January 29, 2009
Vista Remote Desktop and monitor power saving
For some reason, when you use Remote Desktop to access a Vista machine, it seems to disable the monitor's power saving features. Normally my monitor shuts off after 10 minutes, but after I remote desktop into the machine, the monitor never turns off until I log in locally. Once I log back out, the monitor shuts off after the normal time. I've seen a few posts about this, but no one seems to have an answer or solution. Please chime in if you know how to fix this.
VMware and Avira
I was trying to get VMware Server running on my Windows Vista 64 machine, and every time I tried to power up a new VM, it would get stuck at 95%. I finally stumbled on a post indicating that VMware may not play nice with Avira Antivirus; it says you have to uninstall Avira for it to work. I was hoping to get by with just turning Avira off, and even shutting down its service, but still no luck. It seems odd that this is still an issue, but I haven't seen any newer posts about it. I guess I may have to switch antivirus software to get VMware to work?
January 28, 2009
NHibernate
Lately I've been working on a new data access layer. The new project is C# code that I work on, so we decided to use NHibernate, and I've been doing some research and playing with it. The first thing I learned is that the actual NHibernate site is not the best place to go for up to date documentation: I'm using NHibernate 2.0, but most of the documentation is specific to 1.0 and doesn't always apply to 2.0. After some searching (more than should have been necessary, since I would expect something like this to be easy to find), I found another site, NHibernate Forge, with more up to date content, especially the main reference documentation.
So far everything seems to work pretty well. A few little things I have stumbled on:
NHibernate has great logging, but it can be overkill at times. I decided to limit the log4net appenders so that the NHibernate logs didn't overwhelm the rest of the logging. Since I have multiple apps that all use the same log4net configuration, I set up the logging programmatically instead of trying to maintain identical logging config files for each. This makes it a little harder to set up loggers, but I stumbled on how to do this as well.
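Roughly how I approach it: configure log4net in code, then raise the threshold on the "NHibernate" logger hierarchy so only warnings and above get through. A sketch (trimmed down to a console appender for illustration):

```csharp
using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using log4net.Layout;
using log4net.Repository.Hierarchy;

static class LoggingSetup
{
    public static void Configure()
    {
        // Programmatic equivalent of a shared log4net config file.
        PatternLayout layout = new PatternLayout(
            "%date [%thread] %-5level %logger - %message%newline");
        layout.ActivateOptions();

        ConsoleAppender appender = new ConsoleAppender();
        appender.Layout = layout;
        appender.ActivateOptions();

        BasicConfigurator.Configure(appender);

        // Quiet NHibernate's chatty DEBUG output: everything under the
        // "NHibernate" logger namespace now only logs WARN and above.
        Logger nhLogger = (Logger)LogManager.GetLogger("NHibernate").Logger;
        nhLogger.Level = Level.Warn;
    }
}
```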
I was also looking into NHibernate transactions and the possibility of nested transactions using savepoints. I use these in some of my other code to deal with certain error cases where some things need to be rolled back, but not the entire transaction. It seems that this doesn't really work well with NHibernate: because of the tight coupling between the data objects in memory and the database, rolling back part of a transaction could leave the in-memory objects in an inconsistent state. Fortunately, in most cases this will be OK, but it would be nice to be able to use savepoints in some cases.
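For comparison, this is the plain ADO.NET savepoint pattern I use in my non-NHibernate code, shown here with SqlTransaction (the connection string and table names are placeholders):

```csharp
using System.Data.SqlClient;

class SavepointExample
{
    static void Main()
    {
        // Placeholder connection string.
        using (SqlConnection conn =
            new SqlConnection("Data Source=.;Integrated Security=true"))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            {
                Execute(conn, tx, "INSERT INTO Orders (Id) VALUES (1)");

                tx.Save("BeforeOptionalStep"); // named savepoint

                try
                {
                    Execute(conn, tx, "INSERT INTO AuditLog (Msg) VALUES ('x')");
                }
                catch (SqlException)
                {
                    // Undo only the optional step; the main work survives.
                    tx.Rollback("BeforeOptionalStep");
                }

                tx.Commit();
            }
        }
    }

    static void Execute(SqlConnection conn, SqlTransaction tx, string sql)
    {
        using (SqlCommand cmd = new SqlCommand(sql, conn, tx))
            cmd.ExecuteNonQuery();
    }
}
```

With NHibernate, rolling back to a savepoint this way would bypass the session, which is exactly why the in-memory objects can end up out of sync with the database.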
I think I've worked out most of the details of what I need; now I'm creating some lightweight wrappers around NHibernate to give me an abstraction layer between NHibernate and my code. Hopefully everything will be smooth sailing from here on out.
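The wrapper doesn't need to be fancy; something along these lines is what I have in mind (a minimal sketch; the unit-of-work shape is my own choice, not an NHibernate API):

```csharp
using System;
using NHibernate;

// A thin abstraction so application code depends on this interface
// rather than on NHibernate's ISession directly.
public interface IUnitOfWork : IDisposable
{
    void Save(object entity);
    T Get<T>(object id);
    void Commit();
}

public class NHibernateUnitOfWork : IUnitOfWork
{
    private readonly ISession session;
    private readonly ITransaction transaction;

    public NHibernateUnitOfWork(ISessionFactory factory)
    {
        session = factory.OpenSession();
        transaction = session.BeginTransaction();
    }

    public void Save(object entity) { session.Save(entity); }
    public T Get<T>(object id) { return session.Get<T>(id); }
    public void Commit() { transaction.Commit(); }

    public void Dispose()
    {
        transaction.Dispose();
        session.Dispose();
    }
}
```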