Here's the posting of the latest recording I did right before the holidays!

Enjoy, and happy New Year!

http://technet.microsoft.com/en-us/edge/how-microsoft-it-is-enhancing-existing-services-and-offering-new-services-with-near-infinite-scalability-on-demand.aspx

Posted by: Abel Cruz | November 20, 2011

2006 Archive Interview with Siemens PLM (a.k.a. UGS)

In this interview, you will meet David Mitchell, who at the time was Vice President and Technology Officer at UGS, and SQL Server development lead and query optimization guru Conor Cunningham. I served as the moderator of the discussion. At the time of this interview my role was Senior Program Manager in the Global ISV organization at Microsoft.

We talked about the implications of SQL Server 2005 for UGS's (a.k.a. Siemens PLM) success and also dove into other technical aspects of the UGS/Microsoft alliance.

This was a frank and open discussion with an enterprise customer who relies on our platform and technologies to be successful.

Enjoy!

This is a very interesting Washington Post article (http://www.washingtonpost.com/wp-dyn/content/article/2011/02/18/AR2011021805784.html).

I agree that the government, in general, wastes too much money. What I like about the article is that it shows the government making an effort to spend less in a way that actually makes sense.

Enjoy!

Posted by: Abel Cruz | November 9, 2011

MIT Enterprise Forum: Cloud by Example

Coming up on Wednesday November 16th, 2011

Exploring the Business Case for Cloud Computing

Sponsored by


Cloud by Example: From Hype to Reality

The benefits of cloud computing ultimately depend on navigating an immature marketplace filled with hype from nearly every technology provider, and it can be challenging to distinguish exactly what technology solution lies beneath marketing keywords such as infrastructure virtualization, elastic computing platforms, and public, private, and hybrid clouds.

How does any business find just the right provider and the right solution for its needs, while avoiding the very real pitfalls of cloud computing?

Our panelists will share their real-world experience with evaluating and implementing cloud technologies.

Moderator: Tim Porter, General Partner at Madrona Venture Group.

Panelists:

  • Damon Danieli, Founder & CTO, Z2Live, Inc. (use case on deploying social / gaming applications)
  • Ingvar Petursson, Sr. VP, Information Technology, Nintendo (use case on migrating enterprise applications)
  • Ashwin Muthuvenkataraman, Senior Product Manager, Expedia
  • JB Brown, Manager, Nordstrom Innovation Lab

REGISTER AT

http://bit.ly/mitseattlecloud

We look forward to seeing you there!

This post is the third in a planned series about Windows Azure. Over the last two posts, I have been talking about how you can adapt an existing, on-premises ASP.NET application, like the Partner Velocity Platform (PVP) that drives all partner-related functions behind the Microsoft Partner Network (MPN), to one that operates in the cloud. The series is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that are appropriate for the cloud. Although applications do not need to be based on the Microsoft Windows operating system to work in Windows Azure, these posts are written for people who work with Windows-based systems. You should be familiar with the Microsoft .NET Framework, Microsoft Visual Studio, ASP.NET, and Microsoft Visual C#.

This post walks you through the first set of steps we used when deciding to migrate the PVP platform to Windows Azure. You'll see some examples of how to take an existing business application, developed using ASP.NET, and move it to the cloud. This first stage is only concerned with getting the application to work in the cloud without losing any functionality, but it does address some "big" issues, such as security and data storage, that are relevant to almost any cloud-based application.

I'm not going to explore yet how to improve the application by taking advantage of some of the features available on the Windows Azure platform. In addition, the on-premises version of the application that you'll see is based on a real application but is not the actual one we use in production; it contains just enough basic functionality to get started. In later postings I'll discuss how to improve the application by using some of the features available on the Windows Azure platform, and we'll explore adding more features to the application. For now, you'll see how to take your first steps into the cloud.

Principle

As I mentioned in earlier posts, the Partner Velocity Platform (PVP) is the engine driving and meeting the technology needs of the Microsoft Partner Network (MPN). The application is built with ASP.NET, deployed in Microsoft's data center, and is accessible from both Microsoft's intranet and the Internet. The application relies on the federation of Windows Live with the Microsoft corporate Active Directory directory service to authenticate partners.

Goals and Requirements

During the first phase, we had a number of goals for the migration of the PVP application to the cloud that I'll summarize as "getting it to work in the cloud." Optimizing the application for the cloud and exploiting the features of Windows Azure were secondary at this stage. To this end we developed many mini proofs of concept (POCs) to move specific, discrete pieces of PVP functionality from on-premises to Azure.

By taking this approach we were able to learn how to take advantage of Azure without risking downtime in the production environment. We identified some goals to focus on in this first phase. The PVP application in the cloud must be able to access all the same data that the on-premises version of the application can access. This includes the business rules and components we would not be migrating during the first phase. However, because services, not users, would be calling application components in the cloud, we also needed to add service-level authentication between the on-premises and cloud components.

A second goal was to make sure that operations staff had access to the same diagnostic information from the cloud-based version of PVP as they have from the existing on-premises version of the application.

A significant concern that MSIT has about any application is security, so a third goal was to continue to control access to the PVP application based on identities that can be verified from within Microsoft's infrastructure, and to enable users to access the application by using their existing Windows Live credentials. MSIT does not want the overhead of managing additional security systems for its cloud-based applications.

Overall, the goal of this phase was to migrate PVP to the cloud while preserving the user experience and the manageability of the application, and to make only a controlled set of changes to the existing application.

Overview of the Solution

The first step was to analyze the existing application to determine which pieces would need to change when it was migrated to the cloud. Remember that the goal at this stage was to make portions of the application work in the cloud while making sure the on-premises portion of the application continued to work as if nothing had changed, including and especially the user experience. Any change in the on-premises application requires a release, which is a very controlled process and adds time and potential delays to the development cycle. Besides, the services provided by MPN cannot be interrupted. It's like having two trains: an old train moving on an old track and a new train moving on a new set of tracks. As time moves forward, some functionality, represented by a train car, moves from the old track to the new one until all the cars have moved over. At that point, the old track and the old train are decommissioned.

Train

After analyzing the application, we determined that the hybrid architecture (the one we would have after our first release) should look similar to the following.

Architecture

For our first release we decided to move the Assets and Requirements to the cloud. By doing this we were forced to also create some sync services to keep the data synchronized between the rest of the on-premise systems and the new implementation on Azure.

As mentioned before, the PVP application stored Assets in the SQL database. But because SQL Azure is a relatively expensive storage mechanism (compared to Windows Azure table storage), and because the data representing Assets is very simple and non-relational, the team decided to use an Asset provider implementation that used Windows Azure table storage. Switching to a different Asset provider, as you can imagine, did have some consequences, including data transformation and data synchronization. We may talk some more about this later as we get deeper into some of the specifics.
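To give you a feel for what an Asset provider built on table storage involves, here is a minimal sketch using the Windows Azure StorageClient library of that era. The entity shape, the table name, and the class names are hypothetical and deliberately simplified; they are not the actual PVP schema or provider.

// Minimal sketch of an Asset entity and provider over Windows Azure table storage.
// All names here are invented for illustration, not the actual PVP implementation.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public class PartnerAssetEntity : TableServiceEntity
{
    public PartnerAssetEntity() { } // required for serialization

    public PartnerAssetEntity(string partnerId, string assetId)
    {
        PartitionKey = partnerId; // group all assets for a partner in one partition
        RowKey = assetId;         // unique asset identifier within the partner
    }

    public string AssetType { get; set; }  // e.g. "MCP", "Training", "Reference"
    public string Status { get; set; }     // e.g. "Valid", "Expired", "Verified"
    public DateTime EarnedOn { get; set; }
}

public class AssetTableProvider
{
    private const string TableName = "PartnerAssets"; // hypothetical table name
    private readonly CloudTableClient tableClient;

    public AssetTableProvider(string storageConnectionString)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(storageConnectionString);
        tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist(TableName);
    }

    public void AddAsset(PartnerAssetEntity asset)
    {
        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject(TableName, asset);
        context.SaveChangesWithRetries();
    }
}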

We also needed to add a new authentication mechanism between the on-premises components and the Azure-deployed ones (services, worker roles, etc.). Remember the new authentication method I mentioned in previous posts? Well, here is a way to do this. We wanted to make sure we minimized authentication work, so…here comes ADFS/ACS trust to the rescue! Will Perry from Microsoft created a great post that describes how to set this up. You can read all about it at http://blogs.msdn.com/b/willpe/archive/2010/10/25/windows-authentication-adfs-and-the-access-control-service.aspx. But here's a high-level overview of what we needed to do.

The Service we plan to expose on Azure is a WCF OData service, which only works with SWT tokens.
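I can't share the actual PVP.Next service, but as a rough, self-contained sketch of what a WCF Data Services (OData) endpoint looks like, here is a minimal example using the reflection provider. The PartnerAsset type, the PvpAssetSource data source, and the service name are all invented for illustration.

// Rough sketch of a WCF Data Services (OData) endpoint; all names are hypothetical.
using System.Collections.Generic;
using System.Data.Services;
using System.Data.Services.Common;
using System.Linq;

// Hypothetical asset type exposed over OData; not the actual PVP schema.
[DataServiceKey("AssetId")]
public class PartnerAsset
{
    public string AssetId { get; set; }
    public string PartnerId { get; set; }
    public string Status { get; set; }
}

// Reflection-provider data source: any public IQueryable<T> property becomes an entity set.
public class PvpAssetSource
{
    private static readonly List<PartnerAsset> assets = new List<PartnerAsset>();
    public IQueryable<PartnerAsset> Assets { get { return assets.AsQueryable(); } }
}

public class PvpAssetDataService : DataService<PvpAssetSource>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose only what the calling services actually need.
        config.SetEntitySetAccessRule("Assets", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}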

ADFS ACS

1. In this picture the client app contacts ADFS to get authenticated.  Here are some things to remember about this configuration:

· The ADFS server, which we configured to trust ACS similarly to the way Will Perry describes in his post, will only be used to authenticate one single identity: the one that runs the on-premises Windows service used to interact with the Azure service.

2. ADFS will take the Kerberos request and it will provide a SAML token with the proper redirection to ACS.

3. With the new SAML token the client app now contacts ACS 2.0 to get validated.

4. ACS validates and provides a SWT token back.

5. Finally with this SWT token the client application can directly go to the OData services running on Azure to do its intended job.

The following diagram shows the data flow of this token-transformation process, which works well for this type of authentication mechanism.

Data Flow
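To make step 5 a little more concrete, here is a minimal sketch of attaching an already-acquired SWT token to a request against the OData service. The WRAP-style Authorization header shown here is the convention commonly used with ACS-protected REST services at the time; treat the header format, the URL, and the names as assumptions rather than the actual PVP client code.

// Sketch: call the OData service with an SWT token already obtained from ACS (step 5).
// The WRAP header scheme, URL, and names are assumptions, not the actual PVP values.
using System.IO;
using System.Net;

public static class ODataClientSketch
{
    public static string GetAssets(string serviceUrl, string swtToken)
    {
        var request = (HttpWebRequest)WebRequest.Create(serviceUrl + "/Assets");
        request.Method = "GET";

        // Pass the SWT token using the OAuth WRAP convention used by ACS-protected REST services.
        request.Headers[HttpRequestHeader.Authorization] =
            string.Format("WRAP access_token=\"{0}\"", swtToken);

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            return reader.ReadToEnd(); // raw OData (Atom) feed of assets
        }
    }
}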

Just Peeking Inside the Implementation

Perhaps now is a good time to look at some examples of the things we had to do, without actually giving away the implementation of the next version of PVP (a.k.a. PVP.Next). But first let me provide you with an analogy for MPN that you may already be familiar with.

I'm sure many of you are familiar with the American Express Rewards Program. The premise of this program is rather simple. You buy things using your American Express card, they give you points for every dollar you spend on your AMEX, and later you can redeem those points for other items like car rentals, hotel rooms, consumer electronics, etc., based on the number of points required for any given item. This program has assets: the $75 annual fee you pay for the program. It has requirements: the legal agreement you sign and the payment you make. And finally it has enrollment, which can only happen once you agree to the terms and pay the fee.

Now translate that program into MPN language and you will find them to be very similar. In the MPN world, we have assets, requirements and enrollments.

Assets are what Microsoft Partners bring, earn, or achieve in order to qualify for enrollment into any number of programs or to receive benefits in the Microsoft Partner Network. Examples of assets include having individuals in the partner's organization who are Microsoft Certified Professionals (MCPs), specific training the partner has completed, and references for projects they have delivered to their customers in the past.

Requirements are simply business rules, defined by MPN business owners, that specify which assets are needed to qualify for specific program enrollments and/or program benefits. These business rules are expressed as a set of metadata constrained by logical operations, date values, and other pertinent information.

Enrollments are represented in the database as flags that are set on or off depending on whether or not a specific partner has met the necessary requirements to be enrolled into any given program and/or receive any specific program benefit. There is an enrollment status which changes state depending on factors like the enrollment period, the re-enrollment period, earning or losing assets, program fees, etc.
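To make those three definitions a little more concrete, here is a deliberately simplified, hypothetical sketch of how a requirement might be expressed as metadata and evaluated against a partner's assets. None of these names, fields, or rules reflect the actual PVP schema or its evaluation engine.

// Hypothetical, simplified illustration of a requirement expressed as metadata.
// The real PVP rules are far richer and live elsewhere; every name here is invented.
using System;
using System.Collections.Generic;
using System.Linq;

public class Asset
{
    public string AssetType { get; set; }  // e.g. "MCP", "Training", "Reference"
    public string Status { get; set; }     // e.g. "Valid", "Expired"
    public DateTime EarnedOn { get; set; }
}

public class Requirement
{
    public string RequiredAssetType { get; set; }
    public int MinimumCount { get; set; }
    public DateTime NotEarnedBefore { get; set; } // date constraint on the rule

    // An enrollment flag would be set only when every requirement evaluates to true.
    public bool IsSatisfiedBy(IEnumerable<Asset> partnerAssets)
    {
        return partnerAssets.Count(a =>
            a.AssetType == RequiredAssetType &&
            a.Status == "Valid" &&
            a.EarnedOn >= NotEarnedBefore) >= MinimumCount;
    }
}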

I think that by just reading this very long description of some of the "things" inside the PVP Partner Database, you are starting to imagine the level of complexity we have created inside this one critical component of the architecture. Can you imagine having to keep track of every asset status (valid, expired, verified, etc.) and compute requirement and enrollment status for millions of assets across several hundred thousand organizations with millions of associated individuals, all inside a single database that also contains the actual requirements and enrollment evaluation "engines" in the form of stored procedures? What if we want to add more partner organizations? The number of computations just keeps growing exponentially, limiting resources in the system and preventing users from having a smooth experience. So we started by moving Assets to Azure table storage, as mentioned above. We also moved the requirements evaluation engine, which evaluates the assets, to Windows Azure in the form of a worker role.

If you want to get some hands-on experience creating your own Azure worker role, you can follow some of the samples located at http://msdn.microsoft.com/en-us/evalcenter/SPAzureTrainingCourse_SPWorkflowAzure or http://msdn.microsoft.com/en-us/hh285883, among many other similar places online.
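For context, a worker role like the one described above boils down to a class deriving from RoleEntryPoint with a long-running Run loop. The skeleton below is a generic sketch of that shape; the polling interval and the commented-out evaluation call are placeholders, not the actual PVP engine.

// Bare-bones Windows Azure worker role skeleton; the evaluation call is a placeholder.
using System;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class RequirementsEvaluationRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Raise the default connection limit for outbound calls (storage, SQL Azure, etc.).
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // Placeholder: pull pending work (e.g. changed assets) and evaluate requirements.
            // EvaluatePendingRequirements();
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}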

Connecting to SQL Server

Before the migration, PVP used primarily SQL Server for data storage. In this first phase, as well as going forward, we moved some of the relational data to SQL Azure and other data to table storage. However, during the migration the SQL Azure database schema was changed to accommodate future functionality, so we could not just change the connection string. Still, the way to connect to SQL Azure is very simple. We make the change in the Web.config file, similar to what is shown below:

<add name="aPVPnext" connectionString=
"Data Source={Server Name};
Initial Catalog=aPVPnext;
UId={SQL Azure User Id};
Pwd={SQL Azure User Password};
Encrypt=True;
TrustServerCertificate=False;"
providerName="System.Data.SqlClient" />

Notice that the values of Server Name, SQL Azure User ID, and SQL Azure User Password are specific to your SQL Azure account.

There are two things to notice about the connection string. First, because SQL Azure does not support Windows Authentication, the credentials for your SQL Azure account are stored in plain text. You should consider encrypting this section of the Web.config file. This will add to the complexity of your application, but it will enhance the security of your data. If your application is likely to run on multiple role instances, you must use an encryption mechanism that uses keys shared by all the role instances. If you want to learn how to encrypt configuration sections in your Web.config file, read the article "How To: Encrypt Configuration Sections in ASP.NET 2.0 Using RSA" on MSDN at http://msdn.microsoft.com/en-us/library/ms998283.aspx.

The second thing to notice is that the connection string specifies that all communications with SQL Azure are encrypted. Even though your application may reside on a computer in the same data center as SQL Azure, you have to treat that connection as if it were going over the Internet.
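For completeness, here is roughly how application code consumes that connection string. The table name and query are placeholders for illustration, not actual PVP code.

// Sketch: open a SQL Azure connection using the Web.config connection string above.
// The query and table name are placeholders, not actual PVP code.
using System.Configuration;
using System.Data.SqlClient;

public static class PvpDatabase
{
    public static int CountAssets()
    {
        string connectionString =
            ConfigurationManager.ConnectionStrings["aPVPnext"].ConnectionString;

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM dbo.Assets", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }
    }
}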

By now this post is starting to get long, yet I still want to talk some more about things like how to handle dropped connections or SQL Azure connection timeouts. It is also very important to talk about diagnostics and raising events so that you can monitor the application. But I'll save those for next time.

Until next time…

This post is the second in a planned series about Windows® Azure™. I will attempt to show how you can adapt an existing, on-premises ASP.NET application, like the Partner Velocity Platform (PVP) that drives all partner-related functions behind the Microsoft Partner Network (MPN), to one that operates in the cloud. The series is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that are appropriate for the cloud. Although applications do not need to be based on the Microsoft® Windows® operating system to work in Windows Azure, these posts are written for people who work with Windows-based systems. You should be familiar with the Microsoft .NET Framework, Microsoft Visual Studio®, ASP.NET, and Microsoft Visual C#®.

In this posting I'll provide some information regarding MPN's current infrastructure, some of the software behind the scenes, and why MSIT decided to start moving MPN's PVP platform to the Windows® Azure™ platform. As with any company considering this process, there are many issues to take into account and challenges to be met, particularly because this is the first time we have used Azure for an application of the size and complexity of PVP.

Challenges

Like many other businesses contemplating the move to the cloud, the Microsoft Worldwide Partner Group (WPG), which owns MPN, faces several challenges. Currently, deploying a new on-premises capability, or even modifying an existing one, takes too long. Considering how quickly WPG's business changes, the timeframe for developing, provisioning, and deploying even a simple application can be at least several months. No matter the application's complexity, requirements must be analyzed, procurement processes must be initiated, the proposed scope needs to be negotiated with MSIT engineering, networks must be configured, and so on. WPG must be able to respond to its customers' demands more rapidly than the current procedures allow.

Another issue is that much of PVP is used inefficiently. The majority of its servers are underutilized for most of the year, except during a few peak periods, and it's nearly impossible to deploy new applications with the requisite service-level agreements (SLAs) on the existing hardware. Virtual machines are appropriate in some cases, but not in all. This inefficiency means that capital investment is committed to an underutilized infrastructure when it could be better used elsewhere in the business.

A final issue is that the existing PVP platform is an amalgamation of applications, data sources, services, components, and user experiences, all fastened together by various technologies and a large number of business rules contained and managed within a single SQL database. The user experience varies depending on which PVP application is being accessed. In addition, new functionality, new requirements, the onboarding of new business programs, and the addition of more and more partner companies with their corresponding employees make the PVP database a big liability for the architecture.

By moving the PVP set of applications to Windows Azure we believe we can take advantage of economies of scale, promote standardization of all PVP applications, and have automated processes for managing them. Most importantly, we believe this will make us more effective at addressing our customers' needs, make us a more effective competitor, and provide a better ROI to WPG.

Goals and Concerns

One of our goals is to improve the experience of all users of PVP applications. At a minimum, applications in the cloud should perform as well as their on-premises counterparts; the hope, though, is that they will perform better. Many PVP applications are used more at some times than others. For example, partners use the Partner Portal, the Partner Membership Center, and a few other applications several times during the year, while many other applications are not used very much. We would benefit if the critical applications had increased responsiveness during peak periods. This sensitivity to demand is known as dynamic scalability. However, on-premises applications that are tied to specific servers don't provide this flexibility. We can't afford to run as many servers as are needed during peak times because this hardware would be dormant the rest of the time. If these applications were located in the cloud, it would be easy to scale them depending on demand.

In addition to our concerns about security, we have two other issues. First, we would like to avoid a massive retraining program for our IT staff. Second, very few PVP applications are truly isolated from other systems; most have various dependencies. We have put great effort into integrating our systems, even if not all of them operate on the same platform. It is unclear how these dependencies would affect operations if some systems were moved to the public cloud.

Finally, there are two obvious goals: cost reduction and corporate strategic alignment. The hope is that MSIT Engineering can reduce capital and operating costs by a significant amount. And as we seek to have millions of people using cloud services from Microsoft, as a company we're "all in," which means direct alignment with a pillar of the company strategy.

Strategy to Address Concerns with PVP Platform

Very few technology companies in the world are as innovative and open to new technologies as Microsoft is. However, when it comes to internal enterprise-level applications and solutions, Microsoft is just like many other businesses; it must take careful steps when it implements new technologies and ideas to avoid disruptions to business services. As such, our plan centered on evaluating applications that could help the team gain initial experience with the technology, and then expanding on what it has learned across other sets of applications. We call this strategy "try, learn, fail fast, and then optimize." We decided to start with the Partner Velocity Platform set of applications.

The Partner Velocity Platform (PVP)

The Partner Velocity Platform (PVP) is a self-service suite of applications, supported by MSIT, that drives online support of the Microsoft Partner Network. Centered on recruitment and partner retention/satisfaction, PVP is a scalable ecosystem that distinguishes partner offerings, provides incentives, and delivers benefits aimed principally at extending the breadth of Microsoft's partners' market reach.

PVP applications are by definition supported and maintained by MSIT. Outside of PVP there are a number of other dependent applications used by partners and Microsoft business units in support of the Microsoft Partner Network.

PVP Architecture

The architecture is straightforward: not necessarily ideal, nor without complexities, yet simple in its design in the sense that it uses a single large database (the partner database) to store data, business rules, profiles, etc., while other PVP-related applications and services simply read and/or write data to and from the SQL database as needed to maintain the integrity of the system and perform their own functionality.

The PVP architecture leaves, for the most part, the UI and user experience to the interfacing application.  This creates a fragmented user experience which is one of the motivating factors for re-architecting and migrating PVP, but this topic is outside the scope of this and related posts.

Figure 1 – PVP Partner Database Functional Diagram

We broke the re-architecture and migration process into a multi-year, multi-release project.  The first project phase was delivered on 8/6/2011 and the specifics of this phase will be the content of my next post.

Until Next time…

In 2009 Microsoft released the Windows Azure platform, an operating environment for developing, hosting, and managing cloud-based services. Windows Azure established a foundation that allows customers to easily move their applications from on-premises locations to the cloud. Since then, Microsoft, analysts, customers, partners, and many others have been telling stories of how customers benefit from increased agility, a very scalable platform, and reduced costs.

This post is the first in a planned series about Windows® Azure™. I will attempt to show how you can adapt an existing, on-premises ASP.NET application, like the Partner Velocity Platform (PVP) that drives all partner-related functions behind the Microsoft Partner Network (MPN), to one that operates in the cloud. The series is intended for any architect, developer, or information technology (IT) professional who designs, builds, or operates applications and services that are appropriate for the cloud. Although applications do not need to be based on the Microsoft® Windows® operating system to work in Windows Azure, these posts are written for people who work with Windows-based systems. You should be familiar with the Microsoft .NET Framework, Microsoft Visual Studio®, ASP.NET, and Microsoft Visual C#®.

Introduction to the Windows Azure Platform

I could spend a lot of time duplicating what others have already written about Windows Azure, but I will not. I will, however, provide you with pointers to where you can get great information that will give you a comprehensive introduction to it. Beyond that, I will concentrate on adding to what others have written and providing context as it pertains to the migration of the PVP platform to Windows Azure.

Introduction to the Windows Azure Platform provides an overview of the platform to get you started with Windows Azure. It describes web roles and worker roles, and the different ways you can store data in Windows Azure.

Here are some additional resources that introduce what Windows Azure is all about.

http://www.microsoft.com/en-us/cloud/developer/resource.aspx?resourceId=what-is-windows-azure&fbid=D8OAKGXSBYP

http://www.microsoft.com/en-us/cloud/developer/resource.aspx?resourceId=introducing-windows-azure&fbid=D8OAKGXSBYP

There is a great deal of additional information about the Windows Azure platform in the form of documentation, training videos, and white papers; the links above are a good place to get started.

In my next post I will set the stage and tell you about the PVP platform, MPN, and the challenges Microsoft IT faces with the current infrastructure and code behind the PVP platform. We will discuss some of our goals and concerns, as well as the strategy behind the move of PVP to Azure.

Until next time…

Posted by: Abel Cruz | September 13, 2011

How Low Cost Is That Low-Cost Cloud?

I found this very interesting read regarding the cost of on-premises vs. cloud applications (http://www3.cfo.com/article/2011/9/the-cloud_figuring-cloud-costs). I'm sure you will arrive at your own conclusions regarding its content. However, here at Microsoft we have found a huge return on our Windows Azure investment.

The following chart shows business investment vs. maintenance cost if we continue using the current, traditional on-premises solution for the Microsoft Partner Network (MPN).

Notice that by FY14 our maintenance cost would have been approaching 50% of our total cost! Now compare that with the costs associated with moving to Azure over the same period; the chart below shows that.

By moving to Azure we aim to devote a greater share of our IT investments to business enablement and innovation initiatives. Historically, we have not made innovative investments in the Partner Velocity Platform (PVP), which fuels the Microsoft Partner Network; maintenance costs are increasing year over year and we are spending less on business enablement. A very small initial IT investment, started in FY12 to support innovation and incubate a new platform over a three-year span, yields $1.4M in reduced annual maintenance costs, or $4.3M in net savings. This clearly shows a way to optimize future IT investment funding for business enablement.

When we combine business enablement and innovation, we get about 72% business value against a cost of approximately 28%. Not bad, is it?

I'm in the process of writing a paper to discuss in more detail some lessons learned and the value we are seeing as we move the Microsoft Partner Network to Azure over the next few years. Our first milestone went live on Azure back on August 6th. Other deliverables are yet to come later this year and over the next two years.

Stay tuned!

Posted by: Abel Cruz | September 12, 2011

How the Future Affects Your Business

In this post I want to explore the future as a business problem and things you should consider when driving IT strategy in your organization.

As I was considering how best to approach this, I got to thinking: "What is a roadmap?" And my answer is: "It's an attempt to see into the future." From there, I thought it might be useful to start with a focus on the future as a business problem, because the real problem your business faces isn't really "what will Microsoft, or any other technology company for that matter, release in the future?" – it's the future itself, and specifically that you can't predict it with any certainty. So since CIOs and IT professionals alike are being asked to align IT with business, if the future is their problem it's also our problem.

The business problem here is that not only can you not reliably predict the future, but even the best educated guess is sometimes well off the mark. For example, I think even the most grudging critic would concede that Bill Gates has a pretty good track record as a visionary. However, the anything-but-Microsoft community still celebrates his famous remark from 1993: "The Internet? We're not interested in it."

If someone who has correctly predicted as many trends as Bill Gates can get it wrong, what hope do the rest of us have as we try to predict our organisations’ needs and plan accordingly?

Luckily, organisational success doesn’t depend on getting everything right – for example, Microsoft is still here!  One of the early reasons we’re still here is that within 18 months of Bill Gates making that remark, he’d realised he was wrong, and more importantly, he re-engineered the company so that every Microsoft product, and there were a few even then, had new functionality that took advantage of the Internet.

Success depends on being able to adapt quickly to a new and unexpected reality

I’m sure you have examples from your own businesses that also exhibit this: business success doesn’t depend on getting everything right the first time, but it does depend on being able to change the business to adapt to a new reality quickly.  Or, to borrow the phrase from Alan Martin, of LV Martin & Son: “It’s the putting right that counts.”

Our particular challenge in IT is that we’re usually on the cusp of enabling business change, and usually under pressure to deliver it very quickly, but we don’t usually create the need for it.  That comes from events that we often have less visibility of: a change in the market, a change in the CEO, M&A activity.

Take a look at Figure 1 from the nostalgia vault.

In essence, you see that as technology underpins more and more of business process, it becomes more and more of a blocking factor to change.  Conversely, the more flexible your information infrastructure is, the easier it is to change the processes it supports.  By ‘information infrastructure’, I mean, obviously, the systems that connect people to information and processes.

We know this, and it’s not new thinking, of course – but what has changed is the fact that until recently pretty much all infrastructures were custom built and relatively inflexible.  Today, there is real contrast between an IT infrastructure that blocks unexpected change and an IT infrastructure that enables it.


Figure 1 – Traditional IT Innovation Priorities

Success: Realising outcomes that are going to be fluid on an on-going basis

So there’s the rub – the topic of this post is about the future, but I’d suggest that the challenge you have right now is that what you are building as a solution for today’s needs is also the infrastructure for a potentially unknown future requirement.

Running an IT shop is, inherently, like trying to manage a project for which the scope is not locked down, or for which the required functional outcomes are changing on the way through. Project managers, of course, run a mile from this sort of thing. CIOs, on the other hand, knowingly sign up for it.

Building an agile information infrastructure is about trading off those two areas: you need strategy, you need direction and you need cost control today, but you also need to ensure that when something comes out of left field in the business or in the market, your information infrastructure has the agility not only to cope, but potentially to help the business capitalise on it.

Agility

It seems to be retro-marketing day today, but I can't think of a better way to sum up the single biggest thing IT can offer a business in 2011 than this over-hyped word from late-'90s IT slang: agility.

Agility is, of course, the ability to adapt quickly to deal with the unexpected. We've harped on about this for years and it actually turns out to be right!

It’s like football.  All the teams work out, all the teams practice the set moves; all the teams plan their strategy.  But the game-changing moments – and the reason we watch the games – are actually when someone has the agility to capitalise on something that wasn’t in the plan – sometimes theirs, but hopefully more often the opposition’s.

We need that same agility, and if you make agility-limiting decisions today, that will actively hamper the business in the future.  So the issue is a lot wider than “what products will Microsoft release in the future?”  It’s really: “Why should I have Microsoft in my strategy, and how can I position myself to generate the most value from my Microsoft investment?”

So what I hope you’ll get from this post is why we have real, specific value to add over and above most of your other potential partners, and some specifics on taking advantage.

First I want to look at this value going forward.  Then I want to look at this value today; and finally I want to look at what will enable or prevent you deriving this value over the next 2-3 years if you so choose.  That’s my idea of a “roadmap”.

Complexity is the enemy of agility

Because Microsoft’s approach to agility is different to that of most of our competitors, it follows that the flexibility we think we can offer your business is also different.

In a nutshell, Microsoft’s approach is to engineer products to reduce complexity.  I’ll come back and contrast that with the other approach shortly, but I want to be clear that complexity is the enemy of agility.  If you look at the Gartner TCO model, you will see that environmental complexity is one of the biggest operational cost buckets.  If you look at your own environment, you’ll see that the blocker to really moving fast on providing new functionality is probably the number of intersection points with old functionality.

The way complexity creeps in is insidious.  It’s not as if anyone sets out to create a complex environment.  Here’s a pretty realistic historical example of how I see it happening, and its future impact.

You’ve got mail!  In fact, potentially your most critical business application today, according to some analysts – a status which has snuck up on us all rather stealthily over the last few years.

Let’s look at the history.  It was probably about 15 years ago people started demanding E-mail to their desktop computers…and we all know what happened.

Once they got to like it, users wanted it on their laptop around the office.  That meant wireless.  Add complexity.  Then they wanted it on their laptop out of the office.  That meant VPN.  Add more complexity, the start of the security insomnia, and maybe an additional round of passwords for users to remember.  Then they wanted it on ‘holiday’ and they didn’t want to take their laptop at all.  That meant web access.  Add complexity, servers and maybe yet another password for users.  Then they wanted it everywhere and they didn’t want to go to web cafes, which frankly helped you sleep a bit easier from a security perspective.  Until you realised it meant supporting “mobile devices”.  Add servers, devices, helpdesk staff and stress leave.

The point: complexity usually comes from not taking into account the inevitable evolution of business needs at the beginning.  In this example, what started as a pretty simple desktop-to-server scenario is now about delivering information anywhere, anytime.

The worst of it is, the business drivers behind these, and related, scenarios are often pretty solid.  If someone works out in the field inspecting bridges, it makes a lot of sense for that person to enter data from out in the field, not onto a paper pad that somebody then has to enter back at the office, if the paper ever makes it back.

So it’s not as if you can claim desktop connectivity should be enough for anyone!

“The only constant is change”

Heraclitus [hair-ack-leet-us] said that in 540 BC, and he might therefore be able to lay claim to being the first person to sum up modern IT architecture challenges. Perhaps he was actually more geek than Greek. But it's true. People will want more. And you've got two options, and two options only, for delivering it. The first option is point solutions. The challenge, though, is what I call "point-solution entropy", and it works like this:

You’ve got a mail server that is delivering desktop mail just great – but somebody needs a mobile data solution fast.  You buy a “point solution” server and phone combination.  This looks nice and tidy – and it is.  All you’ve got to do is integrate it with the at-desk solution and you have a happy customer.

Then, you need to add the web solution. Hmm, OK. So that web solution now needs to be integrated with the at-desk solution and the mobile solution.

Which is fine, until you add the VPN, etc. etc.

This happens, as you well know.

And I call it entropy because it doesn’t start off badly.  It starts off actually looking very responsive and tidy, and each bit is “best of breed” as they say.  But the more you need to add – and you will need to add more, because change is the only constant – the more chaotic the overall picture gets, and the harder it is to add or change anything easily.

As it gets more complex, it also gets harder to deliver on outcomes.  The mobile solution doesn’t quite work as expected for appointments changed via a web café.  So… Where’s the fault?

Point-solution architecture has one perceived advantage: it appears to be componentised and therefore appears to offer you choices.

This leads me from my retro phrases to a brand new one…

“Loose coupling”: handle with care

"Loose coupling" – unfortunately, it sounds a lot more fun than it actually is!

Like most architects, I’m absolutely convinced that we get high value from loosely-coupled information architectures – that is, where we deliver information via web services to and from disparate systems – because the better information can flow across those systems, the more valuable it potentially is, and there’s almost no cost associated with making it more available.  There’s no downside to a loosely coupled “information” architecture that I can see.

However, that’s a different proposition to decoupling our “infrastructure” architectures.  I don’t see the same upside there at all, and I see very clear costs in terms of the overall capability and agility we can deliver back to the business if we get into the entropy game.  And again, this comes back to complexity.


Figure 2 – Loosely Coupled System

You can see this contrast in Figure 2.  Information loosely coupled below, infrastructure tightly coupled at the top.  But pictures are less convincing than history, and the industry has already learned and applied this lesson in several areas.

A really good example over time has proved to be ERP.  When I look around, I don’t see people trying to mix and match their standards-based Oracle General Ledger with a standards-based Axapta Inventory system and a standards-based SAP Payroll.  Why?  Because it’s been established that it inherently makes sense to stick with an integrated solution across the ERP stack, from a number of perspectives ranging from cost to performance to supportability.  On the other hand, I do see people using web services to make the information contained in those systems more flexibly available – there’s approximately no downside to doing that.

So, taking as read that complexity is bad, there are (again) two options open to you:

1. Hide it
2. Reduce it

So, option 1 is to hide it.

There are plenty of vendors in the on-going complexity-insulation business, which incidentally is very nice work if you can get it.  You send them on-going checks, and they hide complexity for you by shipping in bodies to make all the necessary pieces work together – usually by adding a few more pieces and thus making it more complex.

The problem is obviously that hiding something doesn’t make it go away, so you still pay for actual complexity, not apparent complexity.  Perception, it turns out, is not reality.  This is why delivery timeframes and check sizes often continue to go up under this approach: complexity is being masked, not addressed.

The other approach is to reduce it at the core.  This is exactly Microsoft’s approach.  We think that the place to deal with complexity is once, in our development cycle, not on-going in our customers’ delivery cycles.  So we invest specifically in enabling you to add capability without adding corresponding complexity.  Our integrated communication solution is one example; there are plenty of others.

You can see that the differences between these two approaches are rather deep philosophically, and they are pretty obvious in practice as well.

We believe that engineering capability into integrated products, with a single architect at the top, will deliver you more benefit in your information infrastructure than having you re-invent the wheel with your own integration.  You can have more confidence in the integration, and so can we.  That’s why one premier support contract will enable you to resolve issues with any and all current Microsoft solution components – and their interactions with each other.

I want to answer your objection before you raise it, which is:  Aren’t you just going to lock me in?

Well, let's suppose it did "lock you in", with all its punitive connotations. What does that mean? Perhaps 3-5 years before you can do a big refresh and "escape", worst case? In that case, are you telling me you're not "locked in" to your ERP vendor? Your HR system vendor? Your outsourcer, in some cases?

Lock-in comes down to valuing your options.  If you want to achieve something, you have to commit to a strategy.  We have a strategy.  It’s delivered today, and it has a strong roadmap, not just in terms of shipping products out – but in terms of architectural direction.  It’s a strategy that is differentiated from those of all our competitors and I’m arguing that it is higher-value to you than other approaches for building the agile core infrastructure your business is demanding, because as you add to it you get more capability for less money and with less complexity.

I accept that I’m biased, but that seems to me like a reasonable case for considering a commitment.  And besides, if later you really, really think somebody else’s component works better for you at an infrastructure level, you can swap one of ours out.  Exchange, for example, supports multi-country standards if you really want to leverage them.  So you’re not “locked-in” in the sense of having no options.   But taking a point approach rather than an integrated approach to infrastructure does carry some agility downside along with it.

I hope you can see from this that to maximise the value of Microsoft in your solution delivery strategy, you really need to commit to the small number of products at the solution core.  Although it’s entirely possible to use individual Microsoft products as point solutions, to get the agility we are discussing, there is an integrated solution core, if you like, that returns a disproportionate amount of value.

The reason for this is that one of the ways we drive complexity out of our solutions is to leverage what we’ve already written elsewhere in the stack.

So if it’s not there in your environment, it’s obviously pretty hard for us to leverage it.

The reasoning behind this dependency is also self-evident.  Example: we build all our solutions to use the authentication from Windows Server, because it’s a complex, security-critical area, and you only want to be doing that sort of stuff once.  Ditto with directory services!    How many directories have you got in your organisation, honestly?  Well, in our stack, we’ve got one, and we invest in it continually rather than imitating it in as many places as we can.  That makes sense, because complexity is the enemy of agility, so we want to reduce the instances of complex technologies.  The same complexity arguments that apply to your overall environment also apply within our solution stack.

This needs your strategic consideration.  We don’t have many core products – I count 5 only – but if you are missing one or more of them, then what we can deliver you in terms of agility really drops off fast, because most of our solutions – and those of our partners – leverage that core.


Figure 3 – Microsoft Core Products

Conversely, if you maintain that core (and remember, I’m not saying run everything Microsoft produces – I’m talking about these 5 key products), then your ability to add capability without corresponding complexity is greatly increased – a valuable option.

That, to me, really is a strategic consideration in terms of building for an unknown future.

Here’s a great example.  I bet 10 to 1 that one of the next things your business will want out of the blue is voice over IP, unified messaging, and maybe secure instant messaging and “presence”, if it doesn’t already.

If you're running our latest core infrastructure – Windows Server, Exchange Server, SQL Server, Windows 7, and Office 2010 – we have an integrated option you can add for providing that, available now.

Or consider intellectual property management.  This is a really good example of an environmental change nobody expected.  When we originally designed our mail solution, we didn’t think mail flowing too well would be something that would cause complaints!  But, as we at Microsoft have learned to our cost, confidential E-mail and documents flowing out to people the sender didn’t intend to see them is a growing issue.  Protection of intellectual property is not a new issue, of course, but technology has certainly upped the ease with which leaks can happen.

Enter Information Rights Management: technology addressing a problem that technology accidentally created, and I think a good example for me to use.

Critically, it’s integrated from the user perspective.  Literally one click on the toolbar is all it takes to protect your mail or Office document from forwarding and printing, which is about the level of simplicity you need to get most users to actually use something new.

Under the hood, it becomes apparent why we call this ‘integrated innovation’ – to make this work requires integration across 3 distinct areas in any infrastructure: Productivity suite, Client OS and Server OS.  Using the integrated core, we can build, test and support this complex interaction of components – complexity for us, not you.  And should anything go awry, we support not just the products, but the solution configurations.  One throat to choke, as they say (our support people are bred with strong necks).

And when you remove the marketing hype, that's what integrated innovation is – solution options that build on and enhance each other's capabilities in very practical ways, so that instead of each addition being a new source of integration complexity, it does what it's supposed to do and adds to the value of what went before.

But you do need the core in place to make this easy. The decisions you make today dictate your ability to respond tomorrow.

And speaking of tomorrow, the roadmap is important too. Thus far, I've focused mostly on our philosophy, what we've delivered, and how keeping the core up to date is the key to present and future agility in a Microsoft-based information infrastructure. I hope this has given you some confidence that this is not a pie-in-the-sky sort of thing and that this approach is delivering value today. But I want to spend a few more lines showing you how this same philosophy of integrated innovation has a lot of mileage left in it.

Although, being a software company, we deliver products, our strategy really rolls up into how Microsoft and our partners can deliver solutions across all key parts of the information lifecycle.

Developers are the core of IT's contribution to breakthrough process improvement and competitive advantage, because they take the core and innovate around the edge. A lot of business advantage in the next few years will depend on how quickly and cost-effectively you can harness developer teams to meet the opportunities that come from left field.

We talked about loosely coupling information, and our investments in this area are all grouped under the idea of “connected systems” which focuses on the design and building of applications to consume information resources.

The business objective here is better leverage of information assets without the cost of adding complex Enterprise Application Integration.

IT Pros have a related challenge in the deployment and operation area of the lifecycle: most are still firefighting but want to move forward on new projects.  The way forward for them is clearly more automation, but they do not have time to build it.

Microsoft's investment area here is therefore focused on driving efficient operations through better toolsets. Efficient operations means exactly what it says. The business driver here is the one all the accountants love, because unlike most of what we do, it fits their simple ROI models: IT cost reduction with increased capability. Doing more with less!

Investments in this area are around reducing the people costs and error rates induced by manual operations.   As you can see, there’s a lot of potential in this area for further improvement (for your reference, the potential is in purple!).  Automation reduces costs and it also reduces complexity and the scope for errors, so this is key.

Figure 4 – Efficient Operations: Address IT Challenges

The headline here is something we used to call the "Dynamic Systems Initiative." That's worth an entire post on its own, but in essence it's about re-evaluating the issue of systems management. The scope of this problem is a lot broader than a lot of people think: it's not just about managing systems intelligently, but about getting to the next level of manageability. For example, with Visual Studio, Microsoft has made it a lot easier for developers to build inherently manageable, policy-driven applications, even when they're developed in-house.

The final investment target is the business information worker, who deals with the analysis and action aspects of the information life-cycle.  Our investment here is grouped as “connected productivity”.

This also deserves a post of its own, but it is about making sure all workers get the insight they need, because the information needs of workers across roles are increasing. That's right, even your task workers are evolving into more sophisticated information consumers and collaborators – consider, for just one moment, the value of instant messaging in a call center, and I think you will understand what I'm trying to say.

This is largely centered on the whole Microsoft Office system, which is much more than the desktop Office of old – it's about making sure that we offer integrated capability right down from the device to the desktop to the datacenter.

Enterprise Content Lifecycle – This is about making it simple to author, publish, organize and find content in a managed environment.  Office has always been the preferred way to create content, but we have invested across the solution stack to address aspects of the rest of the lifecycle.

Knowledge Discovery and Insight – Make business data, project information and expertise available to more people, no matter where or in what system it is stored.

Information Worker solutions – Make it easier to provide self-service and electronic forms applications with integrated workflow that leverage familiar Office programs.  A lot of this investment is around better integration between Office and Visual Studio, so developers have more flexibility in delivering applications appropriately to information workers.

Individual Impact – Increase employee self-sufficiency and effectiveness with integrated, easy-to-use tools and modern work products.  Again, integration is key: for example, the stack makes it possible for an end-user, not IT, to place a document securely on a shared site, nominate who can access it and mail them – all from within Word, Excel or PowerPoint.

Integrated Teamwork and Communication – Enable richer communication and more efficient information sharing and tracking to keep coworkers, customers and partners in sync – including simpler workflow integration.

Well, this has been a fairly dense post, hopefully "dense" as in solid, not dense as in… Well, anyway, what you probably noticed missing from this post is any mention of current technology trends (i.e., cloud, virtualization, SaaS, IaaS, etc.) and how they fit within your IT strategy and thinking. That was premeditated. I wanted to get you thinking about the biggest investment you currently have in IT, which is your on-premises infrastructure and set of solutions. In future postings, however, I will have lots more to say about these other topics, especially cloud.

Before I drive you completely insane with this long post, I offer an overview of five strategic suggestions to position yourselves with an agile information infrastructure that can meet both the current and future needs of your business:

I. Get the core agile and keep it agile.
Microsoft and partner products leverage those core products heavily. If you have them in place, innovating at the edges is significantly less expensive, complex, and time-consuming. I trust you can now appreciate the business-agility thinking behind this approach of integrated innovation, and why the core is the key to that agility going forward.

II. Take complexity out of your delivery cycle by letting us handle it in our development cycle.

III. Also, as discussed, complexity creeps in if you don't keep a wary eye on seemingly smart point solutions.

IV. The value proposition for enhancing information flow by loosely coupling information is very strong, and we have invested and innovated heavily in this whole SOA area. The value proposition for loosely coupling infrastructure, in my opinion, needs to be examined a bit more – there are significant downsides in that equation if it is not done properly. But…that's the topic for yet another post.

V. This was such a key point, I thought I would make it twice!  If you take nothing else away from this post, I would ask you to consider whether your current infrastructure core is an enabler or a barrier to the innovations the business is asking you for, and whether Microsoft’s approach can help with that alignment between the needs of the business and your capability to deliver.

The decisions you make today dictate your ability to respond in the future!

Cheers!
