Your article on IT disasters (‘When IT all goes wrong’, Information Age, May 2005) highlights a major cause for concern among IT directors. Whether you are monitoring financial data, keeping track of your customers or managing your business, it is imperative that potential IT crises are avoided altogether: the consequences of failure are more catastrophic than ever before.
IT directors need greater insight into the complex interactions that occur within their networked IT, enabling them to be confident that their infrastructure is optimised to support their business priorities. They need to assess their existing infrastructure in order to see what applications are in use and to identify key upgrade requirements before it lets them down.
Many organisations still suffer from a lack of IT skills and expertise, which results in poor performance levels. Research has shown that poor application performance can severely impact the bottom line and divert resources from exploiting new business opportunities. It can also be detrimental to the functioning of the organisation: as your article highlighted, a lack of understanding can lead to a massive system shutdown. Both expertise and well-implemented processes can minimise such problems. More than ever, businesses need a better understanding of the relationship between applications and the infrastructure they run over, and organisations that put such preventative measures in place greatly reduce the risk of their IT infrastructure failing.
Once performance has been optimised, it is important that the network is monitored and managed effectively in order to identify any potential problems and rectify them quickly. By addressing the potential problem before it arises, organisations can focus on their core business and relax in the knowledge that their worst IT nightmare isn’t just around the corner.
Martin England General manager of Applications Assured Infrastructure portfolio BT Global Services
Gareth Morgan’s piece in the June issue, ‘Locked out’, made many good points on the effects of Internet attacks such as distributed denial of service (DDoS) events and worms. At Arbor Networks, we recently completed research, carried out over a two-year period, into ongoing DDoS attacks on the Internet. The results showed that the duration of DDoS events ranged from a few minutes to several hours. In many instances, however, the research found that multiple short-lived DDoS events launched against a single site in quick succession in fact amounted to a larger, cumulative attack.
In these cases, attacks against the most frequently hit sites lasted from weeks to months. Furthermore, these attacks are targeted not only at the obvious networks and hosts but at many other hosts around the globe, so that they appear to occur at random. There are more DDoS perpetrators and more worm authors actively deploying malware via automatic propagation techniques than ever before. There are also more widely deployed defences available at the Internet service provider (ISP) level, as well as on enterprise and consumer networks and endpoints. This vigilance is yielding results – malware authors are caught and convicted more frequently now than before – a trend that will hopefully continue.
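The cumulative-attack pattern the research describes can be illustrated with a small sketch. The event format, gap threshold and function name below are my own assumptions, not Arbor’s methodology: short-lived events against the same target, separated by less than a gap threshold, are merged into one cumulative attack window.

```python
from collections import defaultdict

def merge_events(events, gap=300):
    """Group short-lived DDoS events per target into cumulative attacks.

    events: list of (target, start, end) tuples, times in seconds.
    Events against the same target separated by no more than `gap`
    seconds are treated as one cumulative attack.
    Returns {target: [(start, end), ...]} of merged attack windows.
    """
    by_target = defaultdict(list)
    for target, start, end in events:
        by_target[target].append((start, end))

    merged = {}
    for target, spans in by_target.items():
        spans.sort()
        out = [list(spans[0])]
        for start, end in spans[1:]:
            if start - out[-1][1] <= gap:          # close enough: same attack
                out[-1][1] = max(out[-1][1], end)  # extend the window
            else:
                out.append([start, end])           # a genuinely separate attack
        merged[target] = [tuple(span) for span in out]
    return merged
```

Three two-minute floods a few minutes apart would thus be reported as a single half-hour attack, matching the cumulative picture the research found.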
Finally, the single most effective measure a company can take is to put a defence strategy with their upstream providers into operation. When an attack hits, time is of the essence – this is not the moment to determine what is normal traffic and what can be safely discarded. Establishing relationships early and working with your providers to understand what measures are available to you and how you can use them to protect your network and your online business is absolutely critical. Some of the most forward-looking ISPs and telecommunications companies are now beginning to offer specific DDoS attack prevention services, a trend that will continue to grow to thwart these attacks.
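The point about not waiting until an attack hits to decide what traffic is normal can be sketched very simply. This is a toy illustration, not a product feature: a baseline computed from historical readings ahead of time, against which live readings are checked when the pressure is on.

```python
import statistics

def traffic_baseline(samples, k=3.0):
    """Compute a simple traffic threshold ahead of any attack.

    samples: historical requests-per-second readings for a link.
    Returns mean + k standard deviations; readings above it are
    candidates for rate-limiting or upstream filtering.
    """
    return statistics.fmean(samples) + k * statistics.pstdev(samples)

def is_anomalous(reading, threshold):
    """Flag a live reading that exceeds the precomputed baseline."""
    return reading > threshold
```

Real provider-grade detection is far more sophisticated, but the principle is the same: the baseline exists before the attack, so the decision during one is mechanical.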
Jose Nazario Senior security engineer Arbor Networks
I read with great interest the story about distributed denial of service (DDoS) attacks, ‘Locked Out’ (Information Age, June 2005). While there has been a lot of hype around DDoS attacks targeting online gambling businesses, we are now starting to see a new trend in which the DDoS attack aims to bring the organisation to a standstill by bombarding it with a massive volume of email. Businesses need to make sure they are protected against this, and not make the mistake of thinking DDoS attacks can only impact websites.
Some of the responsibility for protecting businesses needs to fall on the shoulders of the Internet service providers (ISPs), who provide a service to their customers yet also allow those customers who are spamming to continue doing so.
Today’s technology is sophisticated enough to filter out spam and viruses at the Internet level, stopping them from ever reaching an end user’s inbox and thus preventing a business from feeling the effect of an email DDoS attack. So it’s outrageous that so many ISPs are not harnessing this technology to protect their customers.
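The kind of upstream filtering described above can be sketched in miniature. The blocklists, field names and rules here are illustrative assumptions only — real ISP-level filtering uses far richer signatures and reputation data — but the shape is the same: the message is rejected before it ever reaches an inbox.

```python
BLOCKED_SENDERS = {"spammer@example.com"}       # hypothetical reputation blocklist
BLOCKED_EXTENSIONS = {".exe", ".scr", ".pif"}   # common worm payload types

def filter_upstream(message):
    """Toy ISP-level filter: reject a message before delivery.

    message: dict with 'sender', 'attachments' (filenames) and 'body'.
    Returns (accepted, reason).
    """
    if message["sender"] in BLOCKED_SENDERS:
        return False, "sender on blocklist"
    for name in message["attachments"]:
        if any(name.lower().endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return False, "blocked attachment type"
    return True, "accepted"
```

Filtering at this level means an email DDoS never consumes the victim’s bandwidth or mail servers, which is precisely the argument for doing it at the ISP rather than the endpoint.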
Mark Herbert Founder and director IntY Ltd
As your May Crib Sheet on ‘Information quality tools’ highlighted, data quality directly underpins decision-making effectiveness, and thus business performance overall. What is often overlooked, however, is that there are two dimensions to the data quality battle: within and between enterprises.
Although 80% of the effort required to get data in order must be spent within the enterprise, initiatives will be of limited impact without an effective way of keeping data accurate, synchronised and consistent when it travels across value chains, in the spaces between enterprises.
Ecommerce has cut costs, improved service and reduced lead times for many organisations. However, the efficiencies ecommerce unlocks fall short of their full potential if this kind of synchronisation is not achieved.
Synchronising data systems is a Herculean task. In data-rich environments such as retail supply chains, where organisations from one end of a value chain to the other are required to share common item information, this demands a highly developed, globally accepted and detailed set of standards for defining the data that describes the products, services, commercial terms and parties operating in supply chains.
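At its core, the synchronisation problem is reconciling item master data between trading partners. The sketch below is illustrative only — the GTIN keys and attribute names are my assumptions, not GDSN specifics — but it shows why shared definitions matter: without them, the two sides cannot even agree on what to compare.

```python
def reconcile(supplier_items, retailer_items):
    """Compare item master data between two trading partners.

    Both arguments map an item key (e.g. a GTIN) to a dict of
    attributes such as description and case size. Returns a list of
    (key, field, supplier_value, retailer_value) discrepancies that
    would need to be synchronised.
    """
    diffs = []
    for gtin, supplier in supplier_items.items():
        retailer = retailer_items.get(gtin)
        if retailer is None:
            diffs.append((gtin, "<missing>", supplier, None))  # partner lacks the item
            continue
        for field, value in supplier.items():
            if retailer.get(field) != value:
                diffs.append((gtin, field, value, retailer.get(field)))
    return diffs
```

Multiply this by millions of items and hundreds of partners and the case for a single global registry, rather than pairwise reconciliation, becomes clear.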
Over the past five years the global Retail/CPG industry has developed a framework for doing exactly this – the Global Data Synchronisation Network. The network is now operational and in use by a small but growing number of leading retailers and manufacturers, and a number of authoritative research studies have reported the far-reaching benefits of global data synchronisation in Retail/CPG.
The groundwork achieved by the Retail/CPG industry should pave the way for other industries to follow suit. But businesses can’t expect an overnight revolution – the process is complex and slow to deploy successfully.
In today’s complex business ecosystems, where organisations have become increasingly interdependent, the quality and reliability of the data exchanged is key to economic success. The new battlefield for growth therefore lies outside the four walls of the enterprise, as organisations embrace multi-enterprise collaboration to achieve new levels of growth, profitability and sustainable competitive advantage. Data synchronisation will be a critical component of successful multi-enterprise collaboration, and as such depends as much on the effectiveness of processes between enterprises as on those within them.
Spencer Marlow Retail/CPG solutions manager Sterling Commerce
Trojans at the gate
The mid-June Trojan horse attacks on UK government and financial institutions demonstrate that, yet again, organisations have been caught out, and have had to backtrack on their security management to compensate. It’s further evidence that the large majority of UK businesses are still taking a reactive approach to security and rely upon updating patches and firewalls to combat attacks.
If they continue to rely on firefighting vulnerabilities, they will ultimately be left open to further attacks. Rather than continuing to address each vulnerable area in isolation, UK organisations need to move to a simple, ongoing and intelligent process to flag current and emerging vulnerabilities, respond to these threats and decrease overall exposure time.
Ulrich Weigel Chief security strategist EMEA NetIQ
Supply chain reaction
I read with interest Tim Bradshaw’s recent article about improving visibility within the supply chain (‘Transparent dealings’, Information Age, June 2005). While IT systems need to be effective to create the visibility required, the data they use must be too. Managing goods as they are manufactured and ultimately delivered needs more than a view of historical reports if we are truly going to improve productivity and operational efficiency.
Organisations lack the completeness of information required to truly understand the minute-by-minute performance within the supply chain. Many have traditionally struggled to attain this information since it resides across a number of platforms from different suppliers.
Yet by using real-time technology to combine this information from multiple sources and, critically, presenting it in a usable format to those who can use it to immediately affect operations, organisations can for the first time attain a real-time perspective and unprecedented visibility over the supply chain operation. This perspective will support tactical decision-making by identifying problems while they are still manageable – spotlighting your star performers and highlighting operational bottlenecks.
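Combining records from several platforms into one live view can be sketched as a simple merge. The feed structure and field names below are assumptions for illustration: each platform contributes the fields it knows about an order, keyed by a common order identifier.

```python
def combine_feeds(feeds):
    """Merge per-order status records from several platforms into one view.

    feeds: list of iterables of dicts, each dict carrying an 'order_id'
    plus whatever fields that platform knows (illustrative names).
    Later feeds fill in fields that earlier feeds lack.
    Returns {order_id: {field: value, ...}}.
    """
    view = {}
    for feed in feeds:
        for record in feed:
            merged = view.setdefault(record["order_id"], {})
            # fold this platform's fields into the unified record
            merged.update({k: v for k, v in record.items() if k != "order_id"})
    return view
```

The hard part in practice is not the merge itself but doing it continuously and at volume, which is where real-time middleware earns its keep.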
Cost reduction and productivity gains are available to those who harness real-time operational data. Throwing your spotlight on your supply chain can reap unexpected rewards.
Nelson Smelker Managing director Symon Communications
When worlds collide
Your article ‘Convergence or collision’ (Information Age May 2005) highlights the many financial challenges posed by the move towards [IT and communications] convergence. Yet while certain hurdles remain, the incentives for achieving convergence are so great that the industry must work to ensure they are overcome.
Technological developments have blurred the boundary between fixed and mobile, so that a number of industry players now have a major stake in convergence. Mobile operators want to reduce capital and operational expenditure and create economies of scale; fixed-line service providers want to regain customers by bundling services; cable companies want to add mobility to complete the quadruple play they have three-quarters achieved with video, voice over IP (VoIP) and cable modem data.
Meanwhile, enterprises recognise that fixed-mobile convergence will have an enormous impact on their businesses. The convergence of VoIP with mobile voice services will open up the next frontier of operational efficiencies, while seamless interworking across enterprise and carrier network boundaries will provide improved roaming and better quality of service. Mobile operators that implement convergence successfully will be able to attract more customers and increase revenue.
Ten years ago the technological breakthroughs that have propelled us towards fixed-mobile convergence seemed impossible. Now, as the industry stands on the brink of achieving this goal, there is every incentive for us to work to overcome these final obstacles to realise benefits for all.
Wolrad Claudy Managing director EMEA Tekelec
COPYRIGHT 2005 Information Age Media Ltd.