Thinking Out Loud: CIO Marv Adams

Marv Adams signed on as Ford’s CIO two years ago with a mission from CEO Bill Ford, the founder’s grandson, to help save the company from torrents of red ink and a failed Internet commerce strategy. Since then, Adams has begun to help Ford use IT to cut costs and redirect its information technology strategy. In a recent interview with CIO Insight Executive Editor Marcia Stepanek and Detroit auto writer Paul Eisenstein, Adams talked about his tenure, the history of IT in manufacturing and new collaboration technologies that Ford is using and helping to develop for the rest of corporate America. What follows is an edited transcript of Adams’ remarks.

What’s your background, and how did you work up to this position?

I was the CIO at Bank One for four years, and I also ran a large processing business for Bank One that did credit card and debit processing not only for Bank One but also for a variety of financial institutions around the United States. So I did that for about two and a half years. Before that, I ran engineering systems as vice president of the worldwide engineering systems division at Xerox, and I had 10 years with IBM, starting out as an electrical engineer.

I studied electrical engineering at Michigan State, and then went to work for IBM and did some computer design work, and then transferred to the field, becoming a systems engineer for IBM. I was assigned to the Ford account. I worked in Dearborn with Ford between mid-1985 and 1991, actually designing a lot of the infrastructure, some of which is still here.

You’ve watched Ford from the days when it didn’t have computers in the public relations offices and most offices, all the way through to the point where former Ford CEO Jac Nasser talked about Ford becoming an electronic company.

Yes, my first project at Ford when I was an IBM employee was to work with Ford to replace all of the fixed-function terminals with a PC, a PC that could take on the personality of the various terminal types from the variety of systems that Ford had at the time—HP, DEC, Honeywell, various flavors of IBM, etc. We put in the personal computer on local area networks as a universal terminal at Ford. And that’s when it really started spreading throughout all of the different organizations across the company.

Changing Role of Technology

You were there at the point when there was clearly a rethinking about what the role of technology was and how companies like Ford could achieve it.

Ten years ago, let’s say 15 years ago, I think Ford followed a track that was very similar to a lot of large companies which put in technology to solve departmental problems and to drive departmental productivity. As the personal computer, local area networks and wide area networks came into existence, corporate departments, one by one, started connecting up with each other. This started driving a higher level of productivity across boundaries which, before that point, was driven by more manual processes.

By 1986, Ford recognized that it needed to drive an enterprise-wide strategy so it could create integration and synergies that could have the flexibility to structure the company the way it needed to over time, without technology getting in the way. Ford drove standardization and simplification. And that’s the era I was involved in when I worked for IBM. Ford drove common workstation strategies and common local area network strategies. IT drove a common departmental systems infrastructure. That’s when Ford built and rolled out common manufacturing systems, common product development systems and so forth.

Ford went the classic route, from having departmental systems to recognizing, as the company started to interconnect those systems, that it had an integration challenge, which drove standardization and commonality. Then, in the mid-’90s, Ford did what a lot of companies did, as the Internet really began to take off. It drifted a bit from commonality and allowed creativity to take off, and tried to take advantage of the promises of the Internet and to eliminate a lot of the manual paper processes, leveraging Internet technologies, workflow technologies and so forth.

While some of that paid some nice dividends for the company, much like the PC environment created some integration challenges as it began to roll out, I think the era of dot-com mania created some integration challenges, too. So we’ve gotten a lot of benefits, many of them in isolated areas, and that brings us up to the last couple of years.

Our strategy in IT now, completely in line with the company’s overall strategy, is very much a “Back to Basics” strategy. It’s integration, it’s simplification, it’s consolidation and standardization driving efficiency. It’s reducing variability so that we can improve the quality of all our processes. It’s appropriate integration with our acquired companies. These are some of the themes that we’re focused on in IT at Ford today.

When you took the job a couple of years ago, what was the mission that you came in to accomplish?

There was recognition by [Ford CEO] Bill Ford, by the policy committee, by Nasser at the time that information technology was one of several key competencies that Ford Motor Co. must be the best at for us to build the best cars and trucks. And the mission was to make and sustain the best IT competency in automotive.

After the Dotcom Boom

With regard to Internet strategy, many companies have learned a lot of lessons from the Internet boom days. What has Ford learned?

Our Internet strategy is still evolving. During the hype of the Internet era, the industry hype, IT was in a rush to make everything it did e-business and get the efficiencies that were believed to be out there. I believe that in Phase One of that rush, there was a lot of energy put into front-end systems. But there wasn’t a lot of energy put into the real core processes and the core systems that run the business. Stuff was done out on the edges that actually helped, but there was not as much integration back into the core.

What has happened, as we’ve gone into Phase Two, which is today, is that I don’t think there’s been this massive shift in strategy. What we are doing is more focused on the core business, integrating IT with some of the front-end systems work that we did during Phase One. Now, mind you, we’re not still involved with every single venture that was launched during that era, obviously, but for the most part, we’re on an evolutionary path.

What was the key lesson for Ford from the Internet boom?

Our Back to Basics discipline is all about how we manage the core information of the business. That’s fundamental and it flows through product development, manufacturing, sales and marketing. And you can do a lot of very useful front-end work without touching that, but if you don’t touch that, you don’t get the substantial benefits and integration synergies that you can if you take on that basic part of the business.

Boosting Productivity via IT

Let’s talk for a moment just about the IT side of your cost-cutting strategy. Internally, Ford is talking about using IT to get a 40 percent improvement in IT development productivity. How do you do this?

There’s pretty good documentation around that says if you don’t have standards and if you spend your energy on integration, you are burning a lot more resources on things that aren’t adding or delivering actual business value. Studies have proven that as you standardize, reduce variability and build more commonality into the platforms that you build systems on, you can increase productivity of these systems by at least 40 percent.

You are also insourcing, bringing more IT work inside. You’re hiring 500 IT workers as part of this process, and you say it can save Ford money in the long run. How so? Isn’t that counterintuitive?

Insourcing, simply, is doing things with Ford personnel and taking complete ownership and accountability for the competency and service delivery that we in the past relied on somebody else to do for us. It could be as simple as contract programming: doing more internally and relying less on contract programmers. So that would be in small increments. It could be medium-sized, like bringing in some Web-hosting work. It could be major, like taking back new systems development and application maintenance as opposed to outsourcing that to a major IT company. We’ve done all of those.

And one of the things I want to make clear is that this doesn’t mean we don’t partner. We actually believe that 25 percent to 30 percent of our workload should be variable and we ought to have partners who have niche skills and who we can work with as we ramp up and sometimes ramp down the amount of IT work that we’re doing. But we’re coming off a base that’s 70 percent. Seventy percent of our IT work was being done by non-Ford employees, 30 percent was being done by Ford employees. We’re shifting that to 70 percent Ford/30 percent non-Ford, but the point is it’s still a healthy amount of partnering.

Sparking Collaboration

Collaboration is the next wave of increased automation at manufacturing companies. How will this play into Ford’s strategy for the future?

I think that what’s occurring today is the vision of 10 years ago—concurrent engineering. Simultaneous engineering is becoming a reality as collaboration tools enable development to occur simultaneously in multiple locations, not only within Ford but with key Ford suppliers and partners, enabled by technology.

The other significant breakthrough that is occurring is the integration between functions of the company so that you can design information and drive it straight through into manufacturing, and you have seamless electronic representation of the product as you move from the design phase into the manufacturing phase. You can also then have seamless information with your purchasing organization as they go out and work with our various partners and suppliers to purchase parts and subassemblies.

You also have integrated information into the finance function so that you can track the cost of your vehicles as you’re going through the program. That integration across functional boundaries so that we can optimize the business system is where I think we’re getting the advantages from technology today, versus the more group or departmental level advantages that we got a decade ago with the technology.

Where to Next?

Well, it’s interesting because one can almost misread the Back to Basics philosophy and think that there would be a retrenchment on technology across the board. But if anything, I would imagine that you would have to move forward. You’re going to have to build more cars off fewer platforms and make them look more distinctive.

I think Ford is moving ahead. It’s focusing on building the best cars and trucks in the world; it’s focusing on Quality Is Job One; it’s putting all of the energies of the corporation into building great products, servicing great products, financing and making that convenient for consumers. And in the IT space, the support that we’re providing to the company around that strategy is creating a very nimble infrastructure that enables the different parts of the business to achieve all of those goals simultaneously.

I was in a session with a number of IT employees this morning, and they were going through an exercise where they wanted to describe the vision of their organization five years from now, and it was an infrastructure group that works in IT. And they teed up a question for me and a number of other directors on the team to answer: What would the headlines be in The Wall Street Journal five years from now? I said, “Ford Has Transformed Itself.” It is building the best cars and trucks in the world. We have the best quality in the world, with the most efficient supply chain, with the most productive part development competency and capability in the industry, with very efficient HR, finance systems that enable the very large global company to operate as a nimble, fast-moving organization.

It wouldn’t say anything in the headlines about IT, and yet when you drill down into every single area that makes us competitive, you would find IT infrastructure enabling what we’re achieving.

DaimlerChrysler is very much in a similar mode. Increasingly, they’re finding ways to pull all the technology pieces of the formerly two separate companies together. You have to be moving ahead at Ford, too, to support a functional return to basics throughout the rest of the company, right?

That’s right. A great example of basics enabling us to move ahead: a lot of people are now talking about real-time systems. Our view of real-time systems is that you can sense rapidly what’s happening, and you can respond in the appropriate amount of time. And that enables you to be as flexible as you need to be in the different segments of your business.

How do you do that? One way is to build a common language across your data infrastructure and company. You decouple the data from your legacy applications so that you can build a Web services-like technology infrastructure. And almost like Legos, build the kind of business solutions that support the different parts of the company.

To make giant strides and to move forward in a Back to Basics strategy, you’ve got to go back to fundamentals around data management, data design, application design so that you have a simplified, low variable environment to build that kind of robust business capability. That’s some of the kind of stuff we’re doing today in the organization.
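The “Lego-like” decoupling Adams describes can be made concrete with a small sketch. This is not Ford’s implementation; the record layout, adapter and service names here are hypothetical, assuming the pattern he outlines: translate each legacy system’s data into a common format, then build thin, composable services on that format rather than on any one legacy application.

```python
# Hypothetical sketch of decoupling data from legacy applications:
# a shared, neutral record format plus a small adapter, so that
# services compose on the common format like building blocks.

def legacy_plant_adapter(raw):
    """Translate a legacy plant record into the common format."""
    return {"part_id": raw["PARTNO"], "qty": int(raw["QTY_ON_HAND"])}

def inventory_service(records):
    """A composable service built on the common format,
    not on any one legacy system's schema."""
    return {r["part_id"]: r["qty"] for r in records}

plant_rows = [{"PARTNO": "AX-100", "QTY_ON_HAND": "42"}]
inventory = inventory_service(legacy_plant_adapter(r) for r in plant_rows)
print(inventory)  # {'AX-100': 42}
```

The point of the pattern is that a second legacy source only needs its own adapter; `inventory_service` and anything built on top of it never change.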

The Role of Number-Crunching

So it’s certainly not returning to the days when pure number-crunching power seemed to be the solution to everything.

Although that is in there, too. Pure number-crunching power, you know, today if you go over into the engineering world, you will still find a very large capacity in numerically intensive computing because computer-aided engineering and doing as much virtual prototyping before you do physical prototyping as possible is state-of-the-art engineering. It’s just happening in the background, it’s integrated into the processes of the organization as opposed to 10 years ago, when it was kind of viewed as a silver bullet.

Today, it’s about being able to take information and do a design review with people in different parts of the globe using electronic representation of the data, as opposed to having to fly people around the world to physically look at parts or at a product.

Ford and Real Time

Where is Ford on the so-called “real-time” movement? How is Ford using real-time data now?

Well, real-time systems are certainly an aspiration that we are targeting over the next five to seven years. Examples of where we have real-time systems today would be in this mobile asset system, where we’re doing some telematics with fleet vehicles. We have the ability to track the performance of the vehicles—fluid levels, tire pressure, service needs—and then, in real time, set up service appointments so that customers with large fleets of vehicles can manage those fleets in a much more efficient manner. That would be an example of having close to real-time systems. We’re taking advantage of communications infrastructure that is in place today, like wireless communications infrastructure. We’re also taking advantage of micropackages of computational capability that you can embed inexpensively into the actual product. Those are good examples of real-time systems.

I think out on the manufacturing line there’s another example, our Quality Verification System. When a vehicle is moving down a manufacturing line, it goes through a variety of quality checks. And at the end of the line, we’re able to capture all that information and assess whether a vehicle has passed all its tests and is ready to be released out of the plant. It’s a way to aggregate information as a vehicle is being built. That’s an example of helping manufacturing and helping the plants build higher quality vehicles.
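At its core, a gate of that kind is a simple aggregation: collect every check recorded for the vehicle as it moves down the line, and release it only if all of them passed. A minimal sketch, with hypothetical check names and data structure:

```python
# Hypothetical sketch of an end-of-line quality gate: aggregate the
# results of checks captured along the line, then pass or fail the
# vehicle and report which checks failed.

def quality_gate(check_results):
    """Return (released, failed_checks) for one vehicle."""
    failed = [name for name, passed in check_results.items() if not passed]
    return (len(failed) == 0, failed)

vehicle_checks = {"brake_torque": True, "paint_scan": True, "electrical": False}
released, failed = quality_gate(vehicle_checks)
print(released, failed)  # False ['electrical']
```

The real system would of course capture these results from line equipment as they happen; the sketch only shows the pass/fail aggregation at the gate.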

We have another system where we’re able to aggregate data of multiple forms—so warranty data, quality system data, problem data, information that we get out of call centers—can be collected, mined, sorted through for quality issues and other information that will help discover defects much earlier in the development cycle, sometimes much earlier in the launch cycle so that we can get problems resolved before we launch the vehicle. When we don’t get problems resolved, hopefully we can find them before too many customers have experienced them, and thus far, we’ve had a lot of good success out of that system.
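The multi-feed mining Adams describes reduces, in its simplest form, to pooling records from several sources and flagging anything that crosses a review threshold. A hedged sketch, with made-up feed contents and an arbitrary threshold:

```python
from collections import Counter

# Hypothetical sketch: pool quality signals from several feeds
# (warranty, call center, dealer reports) and flag components whose
# combined issue count reaches a review threshold.

def red_flags(feeds, threshold=3):
    counts = Counter()
    for feed in feeds:
        for record in feed:
            counts[record["component"]] += 1
    return [component for component, n in counts.items() if n >= threshold]

warranty = [{"component": "fuel_pump"}, {"component": "fuel_pump"}]
call_center = [{"component": "fuel_pump"}, {"component": "radio"}]
print(red_flags([warranty, call_center]))  # ['fuel_pump']
```

The value of combining feeds is visible even in this toy case: no single feed would cross the threshold on its own, but the pooled count does.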

You have this Quality Verification System in Ford’s Louisville plant, right? A gate that won’t let the car out if it’s been found to have a defect?

We have that in all of our North American plants. But the technology has gone through various stages. I mean, the gate’s been there for a while. It’s gone through various stages of automation. What we have in place today is the ability to actually capture all the real-time information of what’s occurred as the vehicle has been manufactured, and then either pass or fail it and take the appropriate actions based on the applications.

So real time is something that could integrate across the board, all the way through to marketing, knowing pretty much on a real-time basis where the demand is?

I think real time ripples through everything. Take product development. As you have a much more sophisticated nervous system in product development, you understand where a particular part is reused all the way through the different vehicle lines across the company. As you make changes to that part, instead of having manual processes that would sync that information with other programs that have used that particular part, you’ve got a nervous system that allows you to do that in a much more efficient and automated fashion, including synchronizing the suppliers.

In the supply chain, let’s say you have a supplier that’s run into some difficulty providing a particular commodity and this will have ripple effects in your manufacturing build cycle. Having real-time linkages through the supply chain lets you react to changes like that much more efficiently. Some of that is in place today, and the future holds a lot more opportunity for us on the marketing side of the business.

Getting real-time data from our dealers and getting real-time data from our Web sites on what customers are valuing, what’s hot, what isn’t so hot, and in what regions, and being able to adjust your manufacturing capacity through flexible manufacturing to respond to what is selling and selling at a good premium in the marketplace requires near real-time information. That information connects the functions inside the company and across the enterprise, into the larger ecosystem of the company, dealers and suppliers.

Regional Variations in Demand

Are you already using some of your technologies to allow you to recognize, say, differences in regional demand for Ford cars and trucks and then be able to respond quickly with targeted incentives?

Yes, we’re doing that today. And over time it gets a lot more automated, it’s a lot more real time.

So at some point you may literally just have to push a button to see if, say, you’ve got a high inventory of certain cars in a particular location that can be moved with a quick decision on, say, price?

Yes. What we have to do and what we are doing is gradually transforming the competencies of our business to become experts at dealing with information. People who understand how to mine information, see patterns and then respond to those patterns, people who aren’t in the IT organization, are using the information that we’ve designed into our business system. Those people are in the markets. They’re in the marketing and sales organization. They’re in the manufacturing organization.

They can see the patterns, they have the competencies to mine the data, and then they have the ability to rapidly change various business policies where it makes sense to optimize the business. That’s the gradual transformation taking place, enabled by information.

Using AI for Pattern Recognition

What are you doing with artificial intelligence to help with pattern recognition?

In product development, we are trying to use technology as one of the enablers for knowledge-based engineering, where we are able to build, continuously build on and propagate the collective memory of the organization, and build the knowledge of the organization through common processes and efficient, well-designed information systems. Some nice rules-based AI systems then help users synthesize all of that data and learn from it. Knowledge-based engineering is a good area to drill down into further.

Another example, one of the challenges that almost any large business has today, is integrating several different disparate forms of data and then capturing intelligence from that data. So today we’re looking at various feeds of data coming out of quality systems, warranty systems, call center data, dealer input, etc., bringing it in and mining it for red flags around quality, around safety. We can use technology to solve a quality problem.

We’ve seen some results already. You’ve got a whole value chain of processes enabled by IT tools and data that enable, for example, design for Six Sigma where you’re designing quality into the vehicle.

The Build-to-Order Grail

We’ve heard in the past five years this term “build-to-order” over and over again. And it’s been called a Holy Grail. But isn’t there a danger it can lead manufacturing companies in directions they don’t need to go? Is Ford pulling back or going forward on build-to-order?

We’re not pulling away from it at all. We are not spending our energies talking about building cars in five days, or 10 or 15 according to precise, individual customer specifications. Instead, we’re focusing our energies on Back to Basics, reducing variability in that very complex process, trying to get to the point where we can promise a customer a delivery date and deliver it on time, each time. If we can do just that, we will have made huge strides in order fulfillment.

As you get process control in any complex process, you can then start working on reducing the waste and variability and the delays in the process, and go from delivery cycles that are 40, 50, 60 days long and start bringing those down to something that’s a lot more efficient, a lot faster. We’re not going to quote specific numbers that we’re targeting, but that’s the general approach we’re using.

And to do that you’re focusing on all the core systems, all the core processes, and you aren’t sending out messages that this very complex problem can be solved by some Internet veneer you can throw in front of it. You’re focusing on your core business using the great capabilities of Internet technologies and other technologies.

Ford is also developing new software that will help its plants switch production from, say, one kind of engine or transmission to another in hours rather than in months, as previously. Is this part of Ford’s real-time strategy?

In general, moving to flexible manufacturing is something that our business is very focused on, and obviously that doesn’t occur without a large amount of IT infrastructure enabling it, a lot of process infrastructure, a lot of tooling infrastructure, there’s also a lot of information infrastructure. This is one example of that.

We also are putting a lot of emphasis on collaboration. On the simple side of things, we have a very robust videoconferencing capability throughout the company. I’d have to validate this number, but it’s somewhere in the 500-plus sites. And we drive the capacity of that as we’re trying to get people to do more electronic collaboration. We can’t replace all the physical needs to get together. But it’s at all levels of the company.

Every Monday morning we have operating committee meetings that go forward, and the COO participates in that link together with our worldwide leadership team via videoconference. And it works its way through the operating committees of all the global functions of the company. So we take advantage of that day, and we’re able to respond. Especially in times like these, where we’re looking for every dollar, we can drive even more electronic meetings via the videoconferencing infrastructure.

That’s pretty good, and it’ll be a lot better as the technology matures here during the next two or three years. We plan to move to an IP-based videoconferencing infrastructure that leverages the World Wide Web. The quality is much greater. As you get the quality of videoconferencing up, and as you can distribute to endpoints as granular as somebody’s office or maybe even the laptop they’re traveling with, it really becomes a lot more effective. And that’s the kind of infrastructure we’re moving to.

The second area of collaboration is technology that allows people to have a place to store their workgroup information and have meetings where they can see the presentations simultaneously. With good document management rigor, they can work as a group very effectively and have a repository for information that goes way beyond e-mail systems.

That technology, groupware technology that links together workflow systems, group document management systems, and then tightly couples in with some of the key information systems, including our design systems, enables groups to be a lot more productive and it reduces travel. Not only travel across the seas but inefficient travel even within locations like [Ford headquarters in] Dearborn, Mich., where you’re getting in the car and going from building A to building B.

What’s the Progress?

Looking back on the past 18 months since your strategy began, what progress can you point to?

Obviously, building the best cars and trucks—Quality Is Job One again at Ford. We use standardization, simplification. In a year of Back to Basics, we’re making progress. Our market share rose in September. Our quality warranty costs are down. Our recalls are down. Our customer satisfaction is up substantially over the last year. And we’ve driven a lot of waste out as a result of Six Sigma rigor that’s beginning to get great traction in the company. In the last 18 months, we’ve also taken about a quarter of a billion dollars of cost out just on IT efficiency, and over the next two years, we’re going to deliver another $250 million, which is a half a billion in total over that, what, three-and-a-half year time period. We’re also going after about a 40 percent improvement in IT development productivity as a result of getting back to IT basics. Let me highlight some of those areas.

We’ve got a project called Project Edison where we are driving standardization of IT infrastructure. Instead of having eight different UNIX platforms, five different release levels of Oracle and 10 different release levels of Windows and Intel environments that you have to wrap unique systems management software around, you dilute your skills across them, and you have to set up variances in your operational processes.

We are consolidating, we’re standardizing the infrastructure, we’re then able to consolidate this multitude of servers onto larger, more standard servers. And this is true in the processor space, it’s true in the storage space, and, as a result, we’re driving the utilization of the assets up from the 20 percent to 30 percent utilization range in the UNIX world, in the Intel world, and in the storage world up to the utilization that we enjoy in the mainframes, which is in the 90 percent range. That has meant enormous cost savings. It also benefits us in terms of buying power because you’re leveraging purchases with fewer partners.
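The consolidation arithmetic behind that claim is easy to make concrete. The figures below are illustrative assumptions only, not Ford’s actual server counts: they just show how many machines the same workload needs when average utilization moves from the 25 percent range up to the 90 percent range.

```python
import math

# Illustrative arithmetic only; the server count and utilization
# figures are assumptions, not Ford's actual numbers.

def consolidated_count(servers, current_util, target_util):
    """Servers needed to carry the same total work at a higher utilization."""
    total_work = servers * current_util
    return math.ceil(total_work / target_util)

# 100 servers at 25% utilization, consolidated to machines run at 90%:
print(consolidated_count(100, 0.25, 0.90))  # 28
```

Roughly a 3.5-to-1 reduction under these assumed numbers, which is the kind of ratio that makes the cost savings and the buying-power concentration Adams mentions plausible.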

We’re making some great progress on Project Edison, and we’ve got a plan over the next two years that takes us a lot further. That’s pretty exciting.

Now, pick your IT research organization—Gartner, Meta, Forrester—that has studied IT organizations and documented the fact that 40 percent of their development resources are focused on integrating disparate systems. So you’re not adding any value when you do that; you’re simply integrating system A with system B. And the larger the company and the more disparate the standards are, the more you hit that 40-plus percent.

Our Project Edison is driving reduced variability in the infrastructure, enabling us to get out of as much of the integration game as possible and either bring those savings to the bottom line or move them to enhanced capacity to solve business problems. So that’s a big area of focus for us.

Another thing we’ve done, and this is all consistent with our IT Back to Basics: we started out 18 months ago with 30 percent of the IT resources at Ford as Ford employees and 70 percent sprinkled across approximately 200 different suppliers, and we are aggressively moving that to 70 percent Ford and 30 percent non-Ford, leveraged across about eight different partners.

Because we were spread across so many different companies, we lost buying leverage, which created inefficiencies. As we bring people into Ford, our cost structure is going down at the same time we’re building technology muscle into the company. IT is viewed at Ford as a critical core competency, a competency that along with all the traditional automotive competencies is essential for us to compete and be world class. Bill Ford is a champion of having a strong information competency in the company, and he and the entire policy committee have been very strong sponsors of building this muscle back into the organization instead of having it leveraged across as many companies as we did before. We call that Project Renaissance.

The other thing we’ve done is implement in IT the same kind of delivery process rigor that we implement in designing, manufacturing and launching our vehicles. So it’s a staged process where you design quality into the very earliest stages and have a process rigor in how you roll out those solutions. We call it Rigorous Execution.

So it’s a process, it’s chunking work into smaller pieces, so we aren’t doing these big massive multiyear projects. But we’re trying to chunk them into six-month or one-year chunks where we can deliver business value in an architected way, and then deliver another chunk the next year.

Think about driving reuse and components across everything from high-performance sports cars to electric cars to minivans. There’s some number of components you can reuse across the spectrum, there’s some that have to be unique. Right? The buildsheet specifies the components. Some components are unique, some are shared across vehicle lines, some are shared across classes.

IT is no different. Let’s say if IT were a truck versus a high-performance sports car, our IT equivalent of a work-horse truck would be a large-scale transactional system that has to deliver high volumes of data day in and day out. An IT sports car equivalent would be a collaborative environment that allows a work team to come together very rapidly, have an information-sharing environment and be able to do that across time zones.

There are different IT components that you can share to make that a reality, and if you standardize those components, you can drive significant reuse across the different environments. We call this our patterns work. We believe it’s analogous to component reuse within automotive, and we think it’s a key part of driving our productivity in terms of delivering business value.

Global Integration

Bringing this all into place globally, especially with Ford partner Mazda, could be incredibly frustrating and a huge challenge. You’re dealing with international communication systems, multiple backbones. How do you take this and integrate this on a global scale?

Anytime you integrate two companies, it's extremely challenging. It's culturally challenging, and it's technically challenging. But I think it's getting easier. On the technology side, it's getting easier as a result of much more robust standards that have emerged over the last five or six years, which enable you to bring the infrastructure together far more easily than you could a decade ago.

So you do it in a methodical way. You start out with those things that don't differentiate companies—again, I'm talking from an IT perspective. You ask: Can you aggregate your processing infrastructure? Can you aggregate your networking infrastructure, getting people on a common IP address infrastructure, global directories, integrating e-mail systems? We're not going to get into debates about whether e-mail system A is better than e-mail system B. It's a utility for the company, and it's one of the ways we communicate efficiently. So we're going to be on the same baseline, and we are in our acquired companies.

You look at integrating infrastructure, like I mentioned before, like videoconferencing infrastructure. You work your way up the IT stack, if you will, until you get into the areas that are tied to specific business processes. Before you get to those, as you continue to work up the stack, you get into finance systems, and you can aggregate finance and HR systems, so you can aggregate all of your people information worldwide. That stuff, while not trivial, is relatively straightforward and drives quite a bit of synergy.

Then you hit the harder stuff, like design systems, CAD systems and the like. I really believe that IT's role is to make it very clear what the possibilities are, but also to take the lead from the business on where the business wants to drive for high degrees of integration and high degrees of global synergy. Those decisions tend to get driven by the business, and then IT puts the team in place with the business to actually pull it off. So we are actually doing that today in our whole product development system to get common across North America, Volvo, Europe, Mazda, etc. But it's not IT driving common systems for IT's sake. It's doing it as the business decides that it wants to drive some global car programs or wants to drive reuse of components across programs, and as the business decides it wants to have common process methods and tools to enable that.

Will Ford ever be at a point where one of your engineers can get reassigned from Dearborn to Hiroshima, for example, and comfortably just sit down at a Mazda terminal and be essentially operating on the same sort of hardware, same software and so forth?

As you look, let's say, three to five years out, that is a very real possibility that will be in place. Today, and as you go out in time, you'll find different places around the globe having the exact same infrastructure, but it tends to move with product programs; you know, once you start a program, you don't want to swap out all the infrastructure. But in the three-to-five-year time frame, you will have a very, very similar process, methods and tools infrastructure.

One of your bigger challenges I imagine will be to bring suppliers on board. It's not just the fact that they may not want to, but they may have cost issues and, in many cases, they may be supplying other automakers, or 50 automakers, who are all saying 'We want you to use our system. No, we want you to use our system.' So I would imagine there are some places where making the transition to commonality may be impossible, and the best you can hope for is finding systems that are friendly and easy to translate.

Yes. Obviously, as we select our standards, we do it with a lot of research on industry trends, on the installed base of technology across the world and across the supplier community. We look at where we can create the appropriate degree of integration through standards that don't require systems to be common. We might drive common systems inside the company, but as we work with suppliers, we might be able to do an awful lot of integration just by having some standards that enable supplier A to use a particular system, allow us to use a different system, and yet give us the degree of information sharing that we need.

Ford and Security

Two years ago you talked rather aggressively about bringing key suppliers inside the kimono, if you will, to start sharing data. You're going beyond sharing it with Magna, and you're going beyond sharing it with Visteon Corp. You're talking about small folks who may have only the most primitive technology systems, and it raises all sorts of issues of communications, and also security. Can you address that?

I think depending on the partner we're working with and their degree of technology sophistication, you know, we have to sort of dial in the right level of technology to support their needs. So when you look at something as large as Visteon, we would have very sophisticated and integrated systems infrastructures. They would have all the appropriate information security, virus protection and disaster recovery kinds of capabilities built in, especially for that large a relationship. As you get into the smaller environments, again, it's actually getting easier than it was a decade ago or even five years ago.

The emergence of standards like XML has literally made it pretty straightforward to link Microsoft Office, for example, which almost any size company has, with large-scale transaction systems. You put the appropriate middleware in place to provide all the information security and protection, but through XML standards you can link a spreadsheet planning system of a small supplier with a material release system in a company the size of Ford. We've actually done some of that with small suppliers.
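The kind of XML linkage Adams describes can be sketched in a few lines: a small supplier's spreadsheet is exported as XML, and middleware translates it into neutral records a material release system could consume. The element names, supplier ID and part numbers here are invented for illustration, not Ford's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML export from a small supplier's spreadsheet planner.
SUPPLIER_XML = """
<PlannedShipments supplier="ACME-123">
  <Shipment part="FD-4401" qty="250" ship_date="2004-03-01"/>
  <Shipment part="FD-7730" qty="80" ship_date="2004-03-02"/>
</PlannedShipments>
"""

def to_release_records(xml_text):
    """Translate the supplier's XML export into neutral dict records
    that a receiving material release system could consume."""
    root = ET.fromstring(xml_text)
    supplier = root.get("supplier")
    records = []
    for shipment in root.findall("Shipment"):
        records.append({
            "supplier": supplier,
            "part": shipment.get("part"),
            "qty": int(shipment.get("qty")),
            "ship_date": shipment.get("ship_date"),
        })
    return records
```

Because both sides agree only on the XML format, not on a common system, the supplier can keep its spreadsheet tooling while the automaker ingests standardized records.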

Recently we saw an attack on the Internet that took down more than half the servers in this country. It's got to be one of the things that occasionally wakes you up in a sweat.

Security is one of the top priorities that I focus on. We look at the full topic of business risk from operational processes that are critically important to the company, processes ranging from supply chain integration to financial close, for example. We look at business continuity, which is how do you deal with a variety of risk scenarios and how would you get the business back up and running. So you invest in the right level of insurance protection. I’m using the word insurance generically, and you also look at business continuity capability, depending on the importance of that area of the business.

Another component of the risk framework is disaster recovery, which is a more classic IT capability, again, dialing in the amount of robustness required for that area of the system so that you’re doing it in a business efficient way. So, for example, some of your online transaction systems that you need to operate the business every day do hot mirroring, where you literally duplicate a transaction in another location so that if a site went down, you could just continue to operate real time.

Other types of information systems only require recovery over a few hours. It’s much less expensive. You can back up information in traditional methods, take it offsite, have it there, do that on a daily basis and be able to bring your systems back up within a few hours if you had a particular disaster that required that.
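The tiering Adams lays out, hot mirroring for systems that must run in real time and cheaper daily offsite backup where hours of recovery are acceptable, can be sketched as a simple policy function. The tier names and hour thresholds are illustrative assumptions, not Ford's actual policy.

```python
def recovery_strategy(tolerable_outage_hours):
    """Pick a disaster-recovery tier from how long a system may stay down."""
    if tolerable_outage_hours < 1:
        # duplicate every transaction at a second site; most expensive
        return "hot-mirror"
    if tolerable_outage_hours <= 8:
        # restore from daily offsite backups within a few hours
        return "daily-offsite-backup"
    # least critical systems get the least expensive protection
    return "basic-backup"
```

The point of a rule like this is business efficiency: spending on hot mirroring only where real-time continuity justifies it, and dialing protection down everywhere else.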

Other environments can be protected much less expensively because the business criticality is lower, so that's an area of focus of the risk management. Information security is a big deal to us: what we invest in authentication, how we secure our wireless infrastructure, really focusing on more robust directory systems so that as you work with a partner, you know which people are authorized and what they're authorized to use. So, again, you can dial in the level of access that is appropriate for the type of business relationship and for the specific role within the company.
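Dialing in access per business relationship and per role, as Adams describes the directory work, amounts to looking up what a (partner, role) pairing is entitled to. The partners, roles and resources below are invented for illustration.

```python
# Hypothetical directory: each (partner, role) pairing maps to the
# resources that pairing is authorized to use.
DIRECTORY = {
    ("tier1-supplier", "release-planner"): {"material-releases", "forecasts"},
    ("tier1-supplier", "engineer"): {"cad-viewer"},
    ("small-supplier", "release-planner"): {"material-releases"},
}

def is_authorized(partner, role, resource):
    """Check access against the directory entry for this partner
    relationship and role; unknown pairings get nothing."""
    return resource in DIRECTORY.get((partner, role), set())
```

So a release planner at a small supplier can touch material releases but not the CAD viewer, and any pairing absent from the directory is denied by default.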

Since 9/11, is Ford revising plans to put certain things on the Internet as opposed to proprietary communication systems?

We have continued to raise the amount that we invest in business risk, and we’ve continued to develop different kinds of systems and process infrastructure to deal with the ever-increasing variety of risks that we face in today’s world. And, again, we don’t talk about all the specifics of those. But the answer is we spend more money, we have more in-depth competencies in-house and we have more significant business partnerships with companies that work in this space.

We are in very close contact not only with the major software providers, so that we're in sync with the latest releases of software to keep them as virus-protected as possible, but also with various organizations like CERT and different government organizations, to understand the types of threats that are out there. We also do a lot internally to test our own infrastructure and find our own vulnerabilities.

The thing that would scare anyone might be that report from an Israeli company that pointed out all the new weaknesses of Internet Explorer, showing that you could go in from outside and essentially take over the computer. It was quite a surprise. Microsoft flipped out.

This is a great example of why Back to Basics is not going backwards. It's actually enabling going forward. If you design systems so that any single vulnerability, such as the browser flaw you just mentioned, can compromise them, you are setting yourself up for high risk.

You have to design your systems to understand different kinds of vulnerabilities and have checkpoints all the way through your infrastructure and into your systems that will enable you to be—what’s the word used in engineering? Robust engineering is when your product performs as designed under a variety of harsh operating conditions.

A robust systems design is one that operates under a variety of harsh and perhaps even threatening operating conditions. That requires a competency. And as you look to hundreds of different companies to provide solutions, it’s just harder to control it. And while we partner a lot with technology companies, we do it in a more controlled manner as a result of increasing business risks.

Copyright © 2004 Ziff Davis Media Inc. All Rights Reserved. Originally appearing in CIO Insight.