I must be a complete loser, because I can't see where Ruby is such hot shit. I'd love to read a story, "What you're not getting about Ruby and why it's the tits."

Well, my eloquent cries for help were heard. Scott Hanselman posted what he thought I was looking for: Programmer Intent or what you're not getting and why it's the tits.
Scott's post is good, but I'm afraid that it doesn't exactly answer my question. He does answer two different questions, both of which are entirely valid: Is it important to learn new programming languages, and is expressiveness in a programming language important?
The first question is an obvious yes. In our profession, technology is always advancing. It cannot hurt to explore new ways of doing things. In fact, if you don't, you'd better find a new job 'cause yours will be obsolete (and deprecated) within five years. So I don't think there is anything wrong with expanding your horizons. As long as balls don't touch.
The second question I'd answer with a maybe. Expressiveness in a language may seem like a good thing at first, and Ruby is definitely expressive, but there are some hidden costs that need to be accounted for. Almost every post I've read about Ruby by its fanboys is about its "Syntactic Sugar", or expressiveness. I've commented on the sugar of Ruby before here and in other places. While it does allow you to do cool stuff that is pretty clear in intent such as:
you_are_fubar if @you.screwed? && @you.lackey?

is it worth the costs that come with such expressiveness? With Ruby, those costs seem to be a speed penalty and harder maintainability.
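To make that one-liner concrete, here's a minimal, runnable sketch of the kind of code it implies. Every class and method name here is made up for illustration; the point is just Ruby's predicate-method convention (`?` suffix) and the statement-modifier `if`.

```ruby
# Hypothetical classes to back the one-liner above; names are invented.
class Employee
  def initialize(screwed:, lackey:)
    @screwed = screwed
    @lackey  = lackey
  end

  # Ruby convention: predicate methods end in '?' and return true/false.
  def screwed?
    @screwed
  end

  def lackey?
    @lackey
  end
end

def you_are_fubar
  puts "You are FUBAR."
end

you = Employee.new(screwed: true, lackey: true)

# The statement-modifier form: the 'if' trails the statement, so the
# line reads almost like an English sentence.
you_are_fubar if you.screwed? && you.lackey?
```

Whether that reads as elegance or as cuteness is exactly the question at hand.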
Ruby, being an interpreted language, incurs a speed penalty during execution. The execution engine of Ruby must read high-level code and translate it into machine instructions at the time of execution, every time it is executed. This is in contrast to compiled languages, which are translated once into a low-level (or lower-level) machine language that executes much faster. Of course, this depends on the particular implementation of Ruby you're using. I'm definitely no expert on the subject, but I believe there is no reason why this must be true. Certainly the various Ruby implementations on the .NET platform will be compiled into IL like any other language run on the CLR. But until Ruby is compiled, this fact is going to be the turd in the punchbowl for developers seeking to make scalable web applications.
The second cost is that Ruby appears to be harder to maintain due to the complexity of the language. K, I'm stretching on this one a little; I have to add the disclaimer that I have not used RoR to create a web app, nor have I attempted to maintain a RoR app written by another developer. I'm probably talking out of my ass, but the issue of maintainability has been concerning me, as a .NET developer, for a while now. Microsoft is throwing a lot of new shit into the .NET platform, if you haven't noticed. While some of it is awesome (LINQ), other bits are friggin scary (lambda expressions, anonymous types). I dread coming in behind a developer who has written his entire application using lambdas and anon types. While you can certainly do some amazing stuff with them and reduce cyclomatic complexity, at the same time your code actually becomes less expressive. Writing code that can be easily read and understood by others is almost as important as writing code that does what it is designed to do without error. This becomes harder as the complexity of a language increases. By complexity, I mean how many different ways you can do the same thing. And, it appears to me at least, Ruby is one of those languages that is chock full of complexity. Again, it's not necessarily a bad thing to have the option to perform a task ten different ways, but when you use every one of them in your program you've made the task of maintaining it exponentially harder.
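The "ten different ways" point is easy to demonstrate. Here's a small, runnable sketch: five stylistically different Ruby idioms that all build the same array of squares. None of them is wrong; a codebase that mixes all five is just harder to scan than one that picks a style and sticks to it.

```ruby
numbers = [1, 2, 3, 4]

a = numbers.map { |n| n * n }        # brace-block form
b = numbers.collect { |n| n * n }    # 'collect' is an alias for 'map'
c = numbers.map do |n|               # do/end block form
  n * n
end
d = []
numbers.each { |n| d << n * n }      # manual accumulation with each
e = []
for n in numbers                     # old-school for loop
  e << n * n
end

puts [a, b, c, d, e].all? { |result| result == [1, 4, 9, 16] }  # => true
```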
Is it worth it? I still don't know. Rails' bandwagon is nice and shiny and looks like it's chock-a-block full of hot chicks. But you know how some chicks look hot from a distance, but when you get up close to them you see that they're busted. Kinda like Lindsay Lohan at 3am on a Saturday. I get the feeling that a lot of people are tagging it just so they can tell their friends they did afterwards...
(Protip: Check slide 17. Also, MEMCACHE.)
A real simple idea, yet very useful. I'll definitely be using this in the future.
There's only one problem with it, which is that the code they supply to embed the player doesn't work for me. Flash players can be embedded by using an EMBED tag or an OBJECT tag. Embedded objects don't load correctly on my work machine and laptop. Not sure what the issue is. But if you want to switch from object to embed, use this template:
<embed name="movie" pluginspage="http://www.macromedia.com/go/getflashplayer" src="[MOVIE URL]" width="450" height="370" type="application/x-shockwave-flash"/>
All you have to do is pull the movie URL out of the object tag and place it in the marked attribute. You can, of course, change the size of the player if you want. Small text in the slides is pretty hard to view, so you might want to bump it up as wide as your site's formatting allows. Alternatively, you can click on the "on SlideShare" logo to go to the SlideShare website where you can view it full screen. Coolness.
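If you're doing this swap for more than one slideshow, the URL-pull can even be scripted. A throwaway Ruby sketch (the sample OBJECT markup below is made up for illustration; real SlideShare markup may differ):

```ruby
# Made-up sample of an OBJECT tag carrying the movie URL in a param.
object_tag = '<object><param name="movie" ' \
             'value="http://example.com/player.swf?doc=my-slides" /></object>'

# Grab the value of the "movie" param; [regex, 1] returns capture group 1.
url = object_tag[/name="movie"\s+value="([^"]+)"/, 1]

# Drop it into the EMBED template from above.
embed_tag = %(<embed name="movie" ) +
            %(pluginspage="http://www.macromedia.com/go/getflashplayer" ) +
            %(src="#{url}" width="450" height="370" ) +
            %(type="application/x-shockwave-flash"/>)

puts embed_tag
```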
K, it's not that I know the guy. 'Cause I've only seen him a couple times. A lot of people are talking him up, saying how he's cool and fast and sweet. But I just don't get it. Seriously, from what I've seen, he just looks like another dork.
Anthropomorphism aside, I have yet to see anything about Ruby on Rails that makes me do anything other than yawn. I haven't spent much time on it other than reading people's glowing reviews. I have seen some code examples, which have done more to turn me off Ruby than anything. That and the fact that Ruby scales for shit because it's a purely interpreted language.
This morning I read a post over at FrankFi's blog "FrankFi's view of the world" about how Ruby differentiates between functions and variables. I won't repost the code here--his entry is short and sweet so go over and read it. What I will say is that the Ruby language doesn't require that functions without parameters be terminated with empty parentheses--(). So the language can't tell the difference between a variable and a function without parameters.
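Here's a minimal, runnable sketch of the ambiguity (the class and names are made up). A parameterless method and a local variable can share a name, and once the local is assigned, the bare name silently stops meaning the method:

```ruby
class Account
  def balance          # a parameterless method...
    100
  end

  def report
    balance = 50       # ...shadowed by a local variable of the same name.
    # From this line on, bare 'balance' refers to the local, not the method.
    puts balance       # prints 50 (the local variable)
    puts balance()     # explicit parens force the method call: prints 100
    puts self.balance  # an explicit receiver also gets the method: prints 100
  end
end

Account.new.report
```

Forget the parens or the receiver, and you've got exactly the kind of bug that compiles fine, runs fine, and quietly returns the wrong number.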
This is fail. Sorry, lovers of Ruby, you can't say that this won't result in freakish, hard-to-repro bugs that only show up on production servers. And, if a developer was stupid enough to specifically use this "feature", it would also result in hard to maintain, if not near impossible to maintain, code.
This behavior is another black mark on Ruby in my book. But still, I'm not saying that Ruby on Rails is 100% shit. I'm just saying that if I'm ever going to be convinced that it isn't, somebody is gonna have to write an article called "What you're not getting about Ruby and why it's the tits."
Just a funny thing I came across while desperately trying to avoid working on some busted ass queries: Kitty authentication.
What is Kitty Authentication? It's a form of captcha that, instead of using funked-up text, uses images of cats and dogs and asks the user to discriminate between the two. It's an example of complex pattern matching, something that is easy for humans and horribly difficult for computer programs.
The idea is that the limitations of standard captchas ensure that they will be useless within the next few years. These limitations are nonexistent when asking users to match complex patterns. Current captchas suffer primarily from two limitations. First, there is a limit as to how obscured an image can be. In order to make text-based captcha work, you have to severely beat the fuck outta it. However, if you maul the text too much, people can't figure out what the text says. Because of this, it is pretty much guaranteed that OCR technologies will eventually catch up with and surpass humans in identifying obscured text. That's because OCR technologies are geared toward distinguishing text within a badly scanned image. Second, the number of possible characters that make up a captcha is relatively low (most captchas only use the 26 letters of the alphabet; some add 2-9 as well). This means that there is a very limited set of possibilities of which a captcha character must be a member, thus reducing the complexity of identifying what a captcha is. Break a captcha image down into individual letters and it almost becomes a trivial task to crack.
By asking people to differentiate between two different, yet very similar, types of items, both of these limitations are avoided. First, there is no need to obscure the image, making it easy for humans to identify. It is very simple for humans to look at a picture of a cat and a dog and tell the difference; it is virtually impossible, however, for a computer to do this reliably. Second, instead of there being a limited set of items to choose from, there is almost no theoretical limit to the number of pictures you can choose from. You could, like in the examples here, ask users to tell the difference between cats and dogs. Or, you could ask people to pick out vegetables from among pictures of fruit. The choices are virtually limitless.
The single real limitation to this type of authentication is that there is a practical limit to the number of pictures you can use. This means computers could be trained to identify each image and what that image holds. The simple way around this would be to obfuscate the image. But doing that brings you back to the problem with captchas where you end up making them unidentifiable to humans.
In order to get around this issue, a group at Microsoft Research has partnered with petfinder.com to supply the images. Since Pet Finder has a gigantic (over a million) database of images of cats and dogs that are constantly being replaced, you avoid the issue of computers remembering a particular image and learning what it contains. It's a sweet idea. Not only do you get your captcha, but Pet Finder also gets free advertising. Each picture has a link to that animal's page on their site. So while you're answering a captcha challenge, you might also be adopting a pet!
Via Terry Zink's Anti-Spam blarg.
The reason Dell is leaving the handheld market is that handheld sales have been pretty damn pitiful for the last year. Q1 sales are down an average of 30% across all manufacturers relative to Q1 sales in 2006.
It's pretty obvious that the rise of advanced cellphones has completely crushed the Pocket PC (PPC) market. You pretty much have to carry a phone. Carrying a PPC is a convenience for most people. And now that you can purchase a phone that either provides the functionality that you need your PPC for or is in fact a PPC itself (like a smartphone or soon the iPhone), why would you want to carry both?
I carried a PPC when I went back to complete my bachelor's. During the two years I had one, I carried an Axim. The first model I had was the X5, and after going through two of those I carried an X30. I still have that one, but the screen died on it a while ago.
The X5 was a chunky monster, but so were all of the PPCs back when it came out. Dell was just one of the boys in the market. It was when they released the X3 that they started to pull ahead. My X30 definitely rocked. It was slim, fast, had Wi-Fi and Bluetooth built in, and had a Secure Digital memory slot. I could take class notes on it, cruise the interweb, and play games to burn some time between classes.
But those days have come to an end. Dell's out of the market, which will give the other players some breathing space. But unless they start developing handhelds with cellphone capabilities, they will either go the way of Dell or start catering to specialty markets.
Another disappointing thing about Dell's leaving the market is that they aren't dumping their current stock of Axim handhelds! It looks like they've sold their remaining stock to other companies for resale. You can find X50/X51s from third-party vendors, but not at Dell's site. They're still selling Palms, but no Axims. I did a cursory check and I can't even find any refurb'd Axims at Dell's outlet store. Even though Dell is out of the market, X51s are still going for $500. If they were selling their remaining stock for 50% off I'd snap one up with the quickness. RIP, Axim.
Sigh. Why do people insist on using ridiculous jargon?
As I reflect on my role as a developer evangelist, I aspire to be a force for developer empowerment.
Please, don't fucking empower me. Give me the tools to do my job efficiently. I don't want or need your "progressive" help in order to be "empowered."
I bust my girlfriend's (metaphorical) balls all the time about jargon. Her field is very closely tied to academia where the ability to say absolutely nothing while using jargon is extremely important. In order to sound educated or professional these people believe they need to use words that are either completely inappropriate in the context or that have been made up out of whole cloth. Whenever I read any of her work, I'm always pointing out jargon and asking why she couldn't say the same thing in simpler language. I guess it sounds like I'm sabotaging her, now that I think about it!
In computing, there is only one jargony word that you'll see used often: Deprecated. That means obsolete, btw. Hell, 90% of the people who use it say "depreciated", anyhow. Depreciated can mean obsolete, so why not just use that? Well, it's too understandable. And in order to sound like a professional programmer you have to talk highly of your efforts to refactor your code base to implement the Gang of Four Memento pattern, thus deprecating existing assemblies which do not implement that functionality. Or you could say you're adding undo capability to your program, which will make some of your existing code obsolete. But that would be muuuuch too easy to understand.
Recently read a post about "why" MS will never make .NET cross platform. While the post didn't have much in the way of "why", it was chock full of "oh, come on, pleeeze!"
I can't fault the guy on that. Any time MS spends making .NET cross platform makes me a more valuable programmer. Why wouldn't I want Microsoft to throw heaps of cash into it?
You can always expect that the comments section of any post about cross-platforming .NET will be chock full of FUD and people who can't tell the difference between an S and a $. Well, except for my blarg; nobody ever posts here. Reading that post's comments is like reading comments from a political or religious discussion. Lots of unbridled idealism mixed with damnation and hellfire. You know, like what causes people to kidnap and behead teenage schoolgirls in less civilized places in the world. Reading through the comments you can find quite a number of these "douchebags" (or as I like to refer to them--Internet Tough Guys™).
The author seemed to put too much emphasis on MS driving people to its products as why they are not putting more effort into making the .NET platform cross-platform. I don't think that is legit. The reasons why .NET will never be ported to Linux (and to a much lesser extent to the Mac platform) are threefold:
First, the market just isn't there. According to Market Share, there are fewer people running all flavors of Linux than Windows 98. Now, this is just desktop users, not servers, and the numbers are gathered from web access of around 40k different websites (I don't think /. is one of them, btw). Still, as a rough number, the number of Linux desktops is less than 1% of the total collected in the survey. If MS is looking to port .NET to platforms that small, then they might as well port the damn thing to the PSP, which has about 1/3rd of Linux's share.
Second, exactly which fork do you code to? One of the great things about OSS is the ease of taking a code base in a fresh new direction when it might have grown stagnant and become obsolete. Find enough like-minded people and fork yourself a brave new world. The result is that there are a shitload of "flavors" of the Linux operating system currently in development. I started counting them at DistroWatch but got bored after I hit 30. They are all different in some way, which means that supporting every one of them would be impossible. MS would have to pick the lowest common denominator (or a few of them) and code to it. And how would updates get distributed? The Linux world doesn't have a single package management system; according to Wikipedia, there are at least eight systems, the best known being RedHat's RPM. Having used RPM, I can tell you it sucks ass. I am, of course, a lazy shit and didn't want to spend five hours in man trying to figure out all the ins and outs of it. So not only would MS have to code a port to Linux (and try its best to be cross-flavor compatible), but they would also have to create an update service specifically for the Linux port. Cha ching!
Third, and last, who the hell would use it? Linux is the home of OSS fanboys and Unix Beards. They hate MS with a passion and would never lock themselves into a Microsoft product. They're not exactly crazy to do so, either. Seeing how small the Linux market share is, MS has no financial gain from spending time and effort on the platform. Who's to say that, after five years of lackluster performance, MS won't drop support for a Linux port of .NET? MS is a publicly traded company, responsible to its shareholders. If a particular product isn't holding up its end, it would be irresponsible for MS to keep shoveling cash into it. And to top it all off, MS doesn't exactly have a good track record for its cross-platform products. Without developers with a desire to use the platform, the thing is flat out dead in the water.
Face facts--the only way a port makes sense is if it is developed by and for the OSS community itself. The project must have intrinsic value; it must not solely exist as a "port" of a Microsoft product. If MS were to disappear tomorrow (God might actually listen to the nerds at /.), the project must be able to continue and serve a need in the Linux community. It has to offer more to the Linux developer than just the ability to run programs originally written for Microsoft. Only the OSS community can do this and be successful. Much respect to my Mono brothas.
God kills a kitten. Please, think of the kittens.
And I know somebody who's got a furry body count you wouldn't believe.
Someone just posted on DotNetKicks about what to expect from an interview. We're hiring an entry level position currently, so I recently spent some time thinking about interview questions and the whole interview process. I've got some suggestions you might be interested in.
Obviously, the best source for interview questions is your vanilla interweb search engine. Google is thick with 'em. Browse, read and answer. It will help highlight those areas where you aren't up to snuff.
If you find you suck at .NET, get you a copy of Jeffrey Richter's CLR Via C#. If you program in .NET, you program the CLR. Understanding the common language runtime is essential. It also has tons of info on the ins and outs and implementations of C#.
Back to interviewing... Of course, what you're asked will depend highly on the organization you're interviewing for. If you want to narrow your choices, look into their products and what technologies they use. Ask your interviewer (before the interview) about the position you're being hired for. What will you be working on? What tools will you be using? If they answer web app libraries and smart-client frontends, you know to brush up on ASP.NET and the Compact Framework.
Personally, there would be three areas that I would brush up on: General programming skills, specific programming skills, and tool knowledge.
For general programming skills, I'm talking about techniques that cross language barriers. For instance, know your patterns. Understand that the event model in .NET is based on the publisher/subscriber model. Know some legitimate uses for the Singleton pattern. Be able to describe the Factory pattern without sounding like an idiot.
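Since these patterns cross language barriers, here's a minimal Ruby sketch of the Singleton, using the standard library's `singleton` module. A shared config registry is one of the legitimate uses mentioned above; the `AppConfig` name is made up for illustration.

```ruby
require 'singleton'

# Hypothetical config registry; including Singleton makes .new private
# and adds an .instance accessor that always returns the one object.
class AppConfig
  include Singleton

  attr_accessor :settings

  def initialize
    @settings = {}
  end
end

# Every caller gets the same object back.
a = AppConfig.instance
b = AppConfig.instance
a.settings[:env] = "production"

puts a.equal?(b)        # => true (same object identity, not just equality)
puts b.settings[:env]   # => production
```

Being able to walk through an example like this--and say when you *wouldn't* use a Singleton--beats reciting a definition.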
For specific programming skills, concentrate on those areas where you are deficient and on those that you will be working on (remember what I said above). Know common patterns in .NET, such as the dispose pattern, thread-safe event invoking, and the "using" pattern. Don't get bogged down in syntax. Any interviewer worth anything shouldn't expect you to be able to write a 100% syntactically correct program without using a developer tool. If you are ever asked to write code on a whiteboard and you're not sure about a particular point in syntax, point it out and state that you are not 100% sure it is correct and would use IntelliSense to check syntax at this point.
For tool knowledge, review those tools the company uses. These can include program development tools such as Visual Studio, source control tools like CVS, and database tools like SQL Server Enterprise Manager. If you can, sit down and play with each for a few hours. Open menus and look at what's inside. Do you know what each does? If not, find out. You don't have to know it inside and out, but if they ask you, for instance, what SQL Server Profiler is, you can at least say, "I haven't used it, but it is a program you can use to see all communication between a SQL server and connected clients." That's a hell of a lot better than "Uh, um... uh, er... I think, uh, its um..."
This brings up a final point--you don't know shit. Not in the big scheme of things. Nobody does. You do know a small subset of the big picture. That's okay. Be confident in what you know. If there is anything you're not sure about, don't be afraid to say so. IRL, what would you do if you were faced with a situation where you were unsure of what something was? You would research it. When faced with something in an interview like this, state that you aren't sure about it or that you do not know. Be clear; don't equivocate. If appropriate, describe what your process would be in order to research the point. For example, you might say, "I'm not exactly sure if you can null a struct; I'd have to check my copy of CLR Via C#." Or you might say, "I actually have heard of that situation before, but I've never personally encountered it in my programming experience. I'd definitely consult MSDN prior to making any decisions."
So that's my advice about prepping for an interview. Don't forget: Google "dotnet interview questions" and read/answer them. Brush up on general programming skills. Research the company you're interviewing with for specific programming skills you'll need to study. Know your tools. And don't be afraid not to know something; just be clear and honest about it.