Someone recently remarked to me that you can think of hardware as software that's developed really slowly. While the software space has been going wild over cloud computing, it's been pretty quiet on the hardware side of the equation. But that's going to change as a new class of server hardware emerges that helps businesses take advantage of the power and density savings made possible by new CPU architectures and software stacks.
As an illustration, IDC recently reported on the server market, and the figures show the start of the next wave of change. As you'd expect, the general server market is pretty flat, growing at just 2.7%. But blade servers, which are commonly used for Web workloads, are growing at 7%. Finally, the hyperdense form factor is growing at 29% – an astounding rate.
In some ways the drivers for this change are just the continuation of a long-running story in which everything has moved onto a Web infrastructure that enables horizontal scaling of services. Implicitly this favours buying lots of cheaper systems and building in redundancy at the software level. But the Cloud accelerates the trend further: it's stateless, and you no longer care about the specifics of the hardware layer in the same way.
The challenge for infrastructure managers is that continually adding servers means incurring ongoing costs for electricity, space and management. So anything that delivers better performance per watt in a denser arrangement is interesting. As you can see from the diagram below, the expected growth in this space is really significant.
At the CPU architecture level, ARM chips have been getting more powerful and this year they're going to enter the mix for servers. The first reason is that they're relatively low-power, which means lower running costs. Because they're low power they also give off less heat, so another advantage is that they can be put into a 'hyperdense' arrangement that saves money on space as well. You'll see systems this year from both Dell and HP (see Moonshot). It's pretty astounding to think that the same chip that's powering your phone could be powering Facebook!
If we're truly going to get the benefit of the new hyperdense form factor then the software layer will also need to reflect the capabilities of these systems. So for Ubuntu we're continuing our work on ARM and recently announced the availability of 12.04 LTS on ARM servers – the first commercial Linux to come to the platform. We're also exploring how these systems' unique strengths are expressed and how that impacts the software stack. For example, if you've got a few hundred systems in a half-rack then the problem of managing them is far more significant – so service orchestration (such as Juju) is really critical. These are exciting times in this space and a really interesting project.
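To give a feel for what service orchestration buys you at that scale, here's a minimal sketch of the core idea: you declare services and the relations between them once, and the tool works out a sensible order to bring them up in. This is a toy model, not Juju's actual API – the service names and the function are purely illustrative.

```python
# Toy model of declarative service orchestration: given services and
# (dependent, dependency) relations, compute a valid deployment order.
# This is NOT Juju's API -- just an illustration of the concept.
from collections import defaultdict, deque

def deploy_order(services, relations):
    """Return an order in which services can be started, given relation
    pairs like ("wordpress", "mysql") meaning wordpress needs mysql."""
    deps = defaultdict(set)    # service -> services it still waits on
    rdeps = defaultdict(set)   # service -> services waiting on it
    for dependent, dependency in relations:
        deps[dependent].add(dependency)
        rdeps[dependency].add(dependent)
    ready = deque(s for s in services if not deps[s])
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for d in rdeps[s]:
            deps[d].discard(s)
            if not deps[d]:
                ready.append(d)
    if len(order) != len(services):
        raise ValueError("cyclic relations")
    return order

print(deploy_order(["wordpress", "mysql", "haproxy"],
                   [("wordpress", "mysql"), ("haproxy", "wordpress")]))
# → ['mysql', 'wordpress', 'haproxy']
```

The point is that with hundreds of nodes you declare the topology once rather than scripting each machine; the orchestration layer does the bookkeeping.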
If you’re interested in a quick summary of ARM server check out this Prezi by Victor Palau.
Recently I attended the gaming conference Develop 2011 in Brighton. Digital entertainment (movies/music) is something Ubuntu users are excited about and interested in. This means there's an increasing opportunity for developers to create applications that those users want. So understanding the challenges, concerns and opportunities the gaming industry faces, and how that might apply to Ubuntu, was my focus during the conference.
Perhaps the most immediate thing that struck me is the burgeoning importance of online games. Nick Parker gave an interesting talk on funding development. The slide that stood out most showed that 'core gaming' (think PS3) has now peaked and that online (casual, MMOG, mobile and social) gaming is the real driver of growth for the industry. He pointed out that you're still talking about a core gaming market that's hundreds of millions of dollars in size, but nonetheless the traditional vendors haven't yet grasped the online opportunity.
Generally, it's difficult for new platforms to break into games developers' consciousness. At a basic level, creating games is risky and expensive, so developers target the platforms with the maximum possible number of sales. To some degree online games offer alternative platforms a way out of this conundrum: if the browser is treated as the platform then all operating systems have an equal chance. The devil is in the detail of the technologies used: Flash is fine from a Linux perspective, WebGL could be great, but plugins (such as the Unity browser plugin) are more of a problem. Perhaps the best talk I saw combining these trends was given by Ilkka Paananen, who talked about the opportunities for immersive play within the browser. If you want to see what he means, try Supercell's game Gunshine, which works in a browser on Ubuntu just fine – in fact I lost a Sunday afternoon to it!
There also seems to be a lot of optimism about the opportunities for interesting games development: plenty of positive commentary despite the wider economic conditions. A big part of this was around indie development, with small teams able to create so much for a relatively small investment. A talk by Tony Pearce about raising cash for your game (supported by NESTA) illustrated this: not only was it a great talk, it was absolutely packed with developers.
Reinforcing the positive theme was a very motivating keynote given by Michael Acton Smith, the CEO of Mind Candy, the company behind the super-hit Moshi Monsters. First, I'm embarrassed to admit that I hadn't heard of Moshi Monsters; it turns out that if you're a parent then you know all about them – it's that big! Of course, he was headlining because it's such a massive hit, and with a suitably dramatic story where at one point they almost burned out. But much of his talk's insight could have been applied to any start-up or group creating new products. I heard two key things: one was that you should explore the boundaries of your space with creativity; the other he didn't say directly, but I was struck by how deeply he'd thought about the mechanisms and drivers that power his business. From a pure inspiration perspective, the main sense was the essential energy the team brought to the journey as they explored (and continue to explore) creating something for their users. So there it is – explore creatively, think deeply and be energised!
Apple finally announced iCloud, reinforcing that the Cloud is ready for consumers. It validates some of the things we've been doing in Ubuntu and encourages us to think about how the trend will impact free software in the future. Cringely focuses on Apple targeting Microsoft by making the desktop category just like a device and moving everyone onto the Internet. Steve Jobs is quoted as saying:
“We’re going to demote the PC and the Mac to just be a device – just like an iPad, an iPhone or an iPod Touch. We’re going to move the hub of your digital life to the cloud.”
I don’t know if this targets Microsoft, I do know that Apple has done as much as anyone to make the network a central part of our digital life.
It's clear that we all spend more time online – if you stand back you can see our increased dependence on the Web (we spend more time online than watching TV), along with how central some web apps are becoming to our lives (from Facebook to Google Calendar). You might question how quickly this is happening or how widespread it is: there's not much bandwidth in Africa, and I often find it surprising how poor connectivity is in rural areas. But that's just a question of timing – large numbers of users already think of their computers and the Web as being synonymous.
The Web itself is rapidly becoming the standard development platform and storage medium for applications. With HTML5 and its related technologies we will see increasingly complex and capable web apps: this Financial Times HTML5 app is a nice example and tweaks Apple's tail! Even if the interface of everything can't be a Web front-end, data storage is also moving in that direction: increasingly users think of their content as being 'available' everywhere – meaning online.
From a user perspective this means we all expect to access our favourite applications and our personal data at any point from a myriad of devices. The impact on Windows is that the field is being reset, both at a software and a hardware level. Microsoft is not a cherished consumer brand that everyone loves, so they will have to start over. But it equally impacts anyone that wants to create a general operating system – Ubuntu being my concern.
If everything is on the network, and the network provides many of the applications then there’s going to be a fundamental set of shifts in how the system stack supports the user. Among the many areas, two things stand out for me.
The first theme is that we need to provide ways for users to store and access their content online. We've seen Apple's system, and we're bound to see systems from all the titans of the industry as well as a lot of start-ups. This could be fantastic for users, but there's also potential for drawbacks if there's no standardisation – we don't want to go back to a world of locked-in data.
But it's deeper than data: users don't think "I need my data", they think "I want my photos of Nancy the dog", which means we need to attach storage and applications together. That's why in Ubuntu One we talk about the personal cloud, and we're providing both applications and APIs to build on top of basic data storage and sync. Any data storage (including Ubuntu One) also needs to be available across multiple platforms so that our users can access their content whenever they want or need it. Importantly, to make the Cloud the central storage location it needs to be fully integrated and a seamless part of the user's experience – going to the 'Web folder' is a fail!
The second theme is that the operating system will be a window onto the Web, and this changes what it needs to present to the user and the services it provides to applications. From a user perspective we need to integrate the Web so that there’s no difference between local and network applications. Moreover, some of the metaphors of the Web are impacting how users think about interacting with their computers, take search as an example.
For applications to be truly integrated, the system stack will need to provide services that web application developers can use. For example, rather than signing into a myriad of different web applications, how can the system stack authenticate me to them seamlessly? Perhaps the distinction between local and web apps will even need to disappear, if we can provide technologies that help developers create applications that work both locally and over the network.
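To make the "sign in once" idea concrete, here's a minimal sketch of a system-level session broker that holds the user's single credential and hands each application a derived, rotating token, so no app ever prompts for (or sees) the secret itself. Every name here is hypothetical; a real implementation would use an established protocol such as OAuth and proper key management.

```python
# Hypothetical sketch: the system stack signs the user in once, then
# derives per-application tokens on demand. Apps verify tokens without
# ever handling the user's underlying secret.
import hashlib
import hmac
import time
from typing import Optional

class SessionBroker:
    def __init__(self, secret: bytes):
        self._secret = secret  # established once, when the user signs in

    def token_for(self, app_id: str, now: Optional[float] = None) -> str:
        """Derive a per-application token that rotates hourly."""
        ts = int(now if now is not None else time.time())
        msg = f"{app_id}:{ts // 3600}".encode()
        return hmac.new(self._secret, msg, hashlib.sha256).hexdigest()

    def verify(self, app_id: str, token: str,
               now: Optional[float] = None) -> bool:
        """Constant-time check that a token belongs to this app."""
        return hmac.compare_digest(self.token_for(app_id, now), token)

broker = SessionBroker(b"user-signs-in-once")
t = broker.token_for("photos.example.com", now=1000.0)
print(broker.verify("photos.example.com", t, now=1000.0))  # True
print(broker.verify("mail.example.com", t, now=1000.0))    # False
```

The design point is that tokens are scoped per application, so one leaked token doesn't compromise the user's whole online identity.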
A final thought: I said at the start that Apple has done as much as anyone to make the vision of a connected world real. But Unix and Linux have done even more – network computing is central to our technology, and distributed community is central to our ethos. For me this means Ubuntu has great strengths it can draw on as we create this future – Ubuntu can be the operating system for the rest of us in a connected world!
When I started using Linux in the mid-90s almost all of the developers were part-time; even Linus Torvalds had another job and did kernel development on the side. These days many of the core Linux projects (from the kernel through to Firefox) have full-time paid developers. Consequently, Open Source has been able to progress more in the last 10 years than in the previous 20 years of development.
However, there's still no independent software industry around the platform: by this I mean development shops that create application software for Linux. This is a problem. On the developer side it means we often lose programmers to other platforms as they move to a space where they can earn a living. On the user side it prevents a range of users from using Linux because the range of software isn't there to suit their needs. Both sides of this equation need to be solved for Linux to become a mainstream desktop platform. It's these long-term problems that we're trying to address through the Software Center and related work.
If you look at other platforms, such as the Mac, you see there's a strong hobbyist or casual developer group who are very influential. This group has often made the most exciting, compelling and breakthrough applications and utilities. When the Mac desktop wasn't cool (i.e. OS 9) this group kept the platform alive by creating great new software for users. It's also this type of developer who was first to adopt the iPod/iOS application space and who has been so important in advocating the platform. These developers create software because they love doing so, and they get a positive kick out of the direct and indirect appreciation from users.
Free Software developers often cite the community aspect of working in the open as a driver for working on software. But there are fewer ways for a user to show direct appreciation for the work. So we've been thinking about adding the ability for Ubuntu users to donate to free software applications that they love. It will provide a way for users to show their appreciation, and this positive feedback will encourage the developer to keep cranking out great software. My expectation is that the value of donations will be in users showing their love, and that it will provide for the odd "free beer".
From a user perspective, the experience will be that they switch on donations and charge their account. They'll then be able to donate to individual applications within the Software Center. It should be a straightforward user experience, but the variety of requirements needed to fulfil this properly is complex. There are core problems like storing money within an account, how transactions are processed, and what the user can see within the Software Center – I'm sure they'd like to see what they've donated to, for example.
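The account flow described above can be sketched in a few lines. This is purely illustrative – the class, amounts and behaviour are my own invention, and a real service would also need payment processing, currency handling and fraud checks.

```python
# Illustrative sketch of the donations account flow: top up a balance,
# donate to individual applications, and review donation history.
# All names and amounts are hypothetical.
class DonationAccount:
    def __init__(self):
        self.balance = 0    # store amounts as integer cents, not floats
        self.history = []   # (app_name, cents) pairs the user can review

    def charge(self, cents: int) -> None:
        """Top up the account (the real flow would hit a payment processor)."""
        if cents <= 0:
            raise ValueError("top-up must be positive")
        self.balance += cents

    def donate(self, app_name: str, cents: int) -> None:
        """Move money from the balance to an application's donation tally."""
        if not 0 < cents <= self.balance:
            raise ValueError("invalid or unaffordable donation")
        self.balance -= cents
        self.history.append((app_name, cents))

acct = DonationAccount()
acct.charge(1000)            # put $10.00 on the account
acct.donate("gimp", 250)
acct.donate("inkscape", 100)
print(acct.balance)          # 650
print(acct.history)          # [('gimp', 250), ('inkscape', 100)]
```

Keeping the history inside the account is what lets the Software Center show the user what they've donated to.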
On the developer side of the equation, the experience should be that a software project registers for donations and provides financial details. Then, when some set of donations is received, it's paid out to their account. The core part of the discussion at UDS was around how you identify the right person to give the donations to. This is a difficult problem. The proposal for the initial release is that we'll switch on donations for software which has a foundation and the legal structures to receive donations.
It’s a really exciting idea and one I believe will make a difference to encouraging free software. If you’d like to give feedback or track development add yourself to the blueprint.
One of the impacts of everything going digital is that the amount of data we store and use is exploding. This gets a lot of attention in the Web 2.0 area, but it’s equally true in enterprises. In many ways databases have been revolutionised by Open Source, I find it hard to imagine the Web without MySQL or Postgres.
That's at the volume and scale end of the spectrum; in the hard-core enterprise, Oracle and IBM remain the powerhouses that corporate customers use for their mission-critical deployments. As a market it's worth 19 billion dollars according to an IDC report: Oracle has 44% of that market, and second is IBM DB2 with 21%. So I was very happy to see that the IBM DB2 team has certified Ubuntu 10.04 LTS for IBM DB2 7.2.
This means that the whole suite of DB2 Enterprise Server Edition, DB2 Workgroup Server Edition, DB2 Personal Edition and DB2 Express Edition is validated on Ubuntu 10.04. Obviously this is an important validation for Ubuntu, as it demonstrates that the IBM DB2 team believes Ubuntu is an important platform to validate against. That's not new, as IBM previously validated 8.04 LTS, but it's worth drawing attention to because the enterprise server space is conservative and this shows IBM's long-term commitment.
In terms of how partners interact with Ubuntu, it's also pleasing how efficiently the IBM DB2 team has been able to update the certification to the next LTS release. The fixed release cycle, every two years, means they know exactly when the next LTS will be available and can plan it into their development cycles. And that's a benefit that matters for ISVs, because validation is expensive and uncertain, so making it that little bit easier is a good thing!
While we're on DB2, I'll point you at the DB2 Express-C virtual appliances on Amazon EC2. The objective is to give developers who are already using DB2 on Ubuntu an option on the Amazon cloud, and to give those who love Ubuntu and would like to try DB2 an easy route to do so. So check it out!
Irving Wladawsky-Berger is an interesting technologist and strategist, whose blog is worth reading if you have spare cycles for good-quality input. He's known for having been deeply involved in many of IBM's technical strategy decisions; for example, he was a key actor in their Linux strategy. He officially retired this year and eWeek did a wide-ranging interview that finally made it to the top of my reading list. A key quote that struck me as true came when he was asked where he got his vision:
“The answer is easy: Find where the smart people are and hang out with them. I’m serious. The smart people have a lot of ideas …”
IBM has resources, both money and brain-power, that are far beyond those of most organisations. But it strikes me as true that if you find smart people with a range of views and get into an exchange of ideas, then you've got a much better chance of doing something amazing. That's definitely true of Open Source, but it applies generally. He continues:
“But the way I looked at it a new idea was whether it was something we should do, and then how we should do it in IBM. Because just because it’s something we should do, doesn’t mean we have to do it like everybody else is doing.”
This second point is really important to me, although it’s often difficult to practise. Sometimes, the accepted way of meeting a need is the right way to do it. Sometimes people want a better mousetrap, not an entirely new mouse removal system. Perhaps the problem is well understood, and there are no better approaches, or at least none worth the effort for the benefit.
But generally, if something is worth doing, it's worth examining it from first principles and asking whether it can be done with a different approach. It's really the only way to develop something genuinely innovative and different.
The rub is knowing when to do the former and when the latter – and perhaps there are ways to combine the two.