Software Engineering vs Programming

I am often at a loss for words when it comes to describing what is right or wrong about a project. Throughout my career I have come into projects at many different levels, from engineer to CTO, and have seen projects that I think are good and those that I think are bad. In my opinion it comes down to the difference between architecture and programming. A brilliant programmer can create a great and wonderful program.

What is C++ Code Anyway?

I often see people criticize source code written for C++ as not being "C++" code. Somehow, even though constructs like printf(), malloc(), and so on are supported perfectly well in C++, others call their use into question.
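For what it's worth, a fragment like the following is perfectly legal C++ and compiles cleanly under any conforming compiler (a minimal illustration of the point, not code from the post itself):

    #include <cstdio>   // printf() lives here in C++
    #include <cstdlib>  // malloc() and free()

    int main() {
        // Plain C library calls, fully supported by the C++ standard.
        char *buf = static_cast<char *>(std::malloc(32));
        if (buf == nullptr)
            return 1;
        std::snprintf(buf, 32, "hello from %s", "C++");
        std::printf("%s\n", buf);
        std::free(buf);
        return 0;
    }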

How Big does Big Need to Be?

I was running some numbers about scalability recently, and they provide some perspective. Let's talk about a fairly busy website: 1 million hits a day. If evenly distributed, that's about 12 hits per second. If you estimate the peak to be about 4 times the average, then you can pretty much plan for about 50 hits a second.
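The back-of-the-envelope arithmetic is easy to check (a quick sketch; the 4x peak-to-average factor is the rule of thumb used above):

    #include <cstdio>

    int main() {
        const double hits_per_day    = 1000000.0;
        const double seconds_per_day = 24.0 * 60.0 * 60.0;     // 86,400
        const double average = hits_per_day / seconds_per_day; // ~11.6 hits/sec
        const double peak    = average * 4.0;                  // ~46, plan for ~50
        std::printf("average: %.1f hits/sec, plan for peak: %.1f hits/sec\n",
                    average, peak);
        return 0;
    }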

Maybe God Does Play Dice - Hash Based Data Deduplication

Albert Einstein said of quantum mechanics: “I, at any rate, am convinced he does not throw dice” (he meaning God), often paraphrased as “God does not play dice.” This captures one of the most profound scientific discontinuities of all time. Just as Newtonian physics and geometry were broken by Einstein's theory of relativity, Einstein's view of a predictable universe was shattered by quantum mechanics.

This is pretty heavy stuff for a computer technology essay, but there is a point. One of the more interesting things to happen in data storage technology in the last few years is the technique of data block deduplication. This is a process by which computer files are analyzed for blocks (fixed-length quantities of data) that are the same as other blocks within the file or in other files. Then, as duplicate blocks are found, only a single copy is stored, with the other identical blocks referencing the first one. It seems pretty obvious, right?
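A minimal sketch of the idea, assuming a hypothetical hash-keyed block store (production systems fingerprint blocks with a strong cryptographic hash such as SHA-256; std::hash is used here purely for brevity):

    #include <cstdio>
    #include <functional>
    #include <string>
    #include <unordered_map>
    #include <vector>

    // Hypothetical block store: maps a block's hash to the single stored copy.
    // A file is then just a list of hashes referencing shared blocks.
    struct BlockStore {
        std::unordered_map<std::size_t, std::string> blocks;

        // Store a fixed-length block and return its hash; a block whose hash
        // is already present is treated as a duplicate and not stored again.
        std::size_t put(const std::string &block) {
            std::size_t h = std::hash<std::string>{}(block);
            blocks.emplace(h, block);  // no-op if the hash already exists
            return h;
        }
    };

    int main() {
        BlockStore store;
        // Two "files" sharing a common block: only 3 unique blocks are kept.
        std::vector<std::size_t> file1 = { store.put("AAAA"), store.put("BBBB") };
        std::vector<std::size_t> file2 = { store.put("AAAA"), store.put("CCCC") };
        std::printf("blocks stored: %zu\n", store.blocks.size());  // prints 3
        return 0;
    }

Note that declaring two blocks identical because their hashes match is a probabilistic bet that distinct blocks never collide, which is the dice-throwing the title alludes to.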
