The building of any system is a cumbersome and monumental task, and this is especially true of enterprise model systems. Such systems are expensive and touch on many aspects of an organization. Concerns such as continuance, security, development choices, and the flexibility of the system must all be addressed in the planning, construction, and maintenance of any such system. These systems, although very beneficial to their organizations, require detailed design, quality assembly, and constant monitoring for both efficiency and maintenance.
The enterprise model system is a vast and largely non-physical system, designed to provide flexibility and access over a large area that often extends onto the global stage. Notre Dame's Dictionary of IE Terminology defines an Enterprise Data Model as, "A consistent collection of data structures expressing the data needs of the organization. This schema is a comprehensive, base level, and logical description of the environment in which an organization exists, free of physical structure and application system considerations." (Notre Dame section E)
Although this design is ingenious in its flexibility and accessibility, the structure required to maintain it often becomes cumbersome, bordering on unwieldy. Considerable planning must therefore be undertaken beforehand to ensure that the system can be used and maintained within a convenient structure. That structure, once in place, becomes a permanent fixture, as the system is by then too large and too entrenched to be easily removed or replaced.
Because of this large and non-removable nature, the enterprise model system quickly becomes what is known as a legacy system. Many efforts have been undertaken to define concretely what a legacy system is. The general consensus in the computing world is the following: a system or application that has grown so large and become so critical to the environment in which it was placed that, despite its age and degree of obsolescence, it is too costly and too disruptive to replace. (WEBNOX para. 1)
Legacy Applications and Systems
A neglected or otherwise ignored legacy component can eventually corner an organization into a time of crisis. A well-known example of legacy technology is COBOL. For the most part, any organization that still runs COBOL is too dependent on it to migrate its systems to a more competitive platform, such as a C-based language. This unwillingness to replace legacy systems contributed to the massive panic now known as the Y2K bug. The fear caused by the imagined repercussions of the Y2K bug led to a massive recruitment drive for COBOL programmers to upgrade long-neglected legacy applications and systems.
At the other extreme, a well-maintained legacy system can serve as a faithful and enduring part of an organization. Another illustration of the "double-edged" nature of legacy systems is the lodging industry. Most of the systems used by hotels and motels are extremely outdated but still functional. This long-standing functionality affords the owners of these systems a great cost advantage, in that the systems will seemingly never need to be replaced or overhauled. The disadvantage lies in the fact that, in order to remain competitive, these systems and their applications must undergo constant upgrading and maintenance, which in the long run can rival the costs of the replacement and overhaul they had previously avoided. With this in mind, prudent organizations should carefully weigh the monetary and productivity costs of continuance against those of replacement. (Hospitality para. 6)
There are a great number of repercussions when a decision is made to replace a legacy system. CIO Insight's Whiteboard article "Pulling the Plug on a Legacy System" had this to say about putting an end to such a system: Keep, migrate, redesign or shut down? Deciding what to do with legacy systems is one of the most common decisions a CIO faces. Many scenarios can lead a CIO and his or her team to decide to retire an application or make a fundamental change to it: changes in corporate strategy or business processes; a merger, acquisition or business closure; government mandates; the lure of a promising new technology; or the demise of a vendor or product line. But if the wrong migration option is selected, a company can be burdened with unnecessary expenses, lost data, or a system that is a white elephant. Even worse, a company's business strategy or a fundamental business process can go begging for adequate technical support. (Ulrich para. 1)
The monumental collapse of systems foreshadowed by the article would be very detrimental, and in terms of overall cost an organization is far better off avoiding it. Unfortunately, there is only one option that allows a legacy system to stay both alive and competitive: upgrading. Upgrading is a simple process, yet one that creates dependency. For the system to stay up to date, it must be continuously fed the newest upgrades, much as a vehicle must be continuously fed gasoline.
The Optimize Online Magazine article "Tuning Up Legacy Systems" highlights three approaches that can be used to increase the effectiveness of a legacy system. The first two of these are maintenance and modernization. Maintenance, it states, is "an incremental and iterative process in which small changes are made to a system. These are often bug fixes or small functional enhancements that don't involve major structural changes." This puts the least amount of strain on both programmers and the system itself, which is why it is the most recommended of the three. The second approach is modernization, which the article defines as "more extensive changes that still retain a significant portion of the existing system. These changes may include restructuring the system, enhancing functionality, or modifying software attributes." This approach, although more burdensome on the system and its personnel, is often used in conjunction with maintenance to create a doubly effective revitalization of the outdated system.
The third approach, which the article clearly opposes, going so far as to state that it should be used only in emergency circumstances, is replacement. This approach, as the name implies, puts a completely new system in place of the old one. Replacement is time consuming, cumbersome, and anything but seamless; it can in fact lead to an entire collapse of the organization's IT infrastructure. Even if that collapse is only a temporary setback, it can still cost the organization a great deal of productivity and overall business. (Seacord para. 6-8) Multiple industry analysts make clear that, when dealing with a legacy system, the main concerns are, first, keeping the system competitively effective and up to date; second, preferring minor revitalizations over system replacement; and last, because replacement is such a dire and undesired option, taking great pains when selecting or developing any system that could evolve into a legacy system, so that it will continue to meet the organization's business needs far into the future.
Once the system architecture is established and stable, the most prominent concern becomes keeping that system secure. Whether the system is a static internal structure of the organization or a dynamic entity meant to operate on the Internet, the basics of system security remain the same. A well-planned balance between security and function is crucial to the longevity of the system; if the balance tilts too far in either direction, the system will topple over.
In general, a quintessential aspect of network security is a stable, even if only barely stable, balance between access permission and access restriction. If the network's security access levels are too lenient, unauthorized users can acquire or tamper with the network's contents in ways the network's controllers never sanctioned. Conversely, if network security access is too stringent, properly authorized users may be unable to gain the access levels they need to use the network as intended.
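This balance between permission and restriction can be sketched as a deny-by-default permission check. The role names and permission sets below are illustrative assumptions, not taken from any particular product:

```python
# A minimal sketch of balancing access permission against access restriction.
# Each role is granted an explicit set of actions; anything not explicitly
# granted is denied, so loosening or tightening the balance is a matter of
# editing these sets rather than rewriting the check itself.

ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "delete"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: unknown roles or actions get no access."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default design errs on the restrictive side, which matches the consensus that an over-permissive network is the greater risk.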
The list of security features needed in today's insecure, technology-based world could fill several volumes simply naming the concerns, let alone detailing each one. No structure, physical or non-corporeal, can be made entirely secure. Even if every precaution is taken at the time of construction, new advances, progress, and even decay through the simple passage of time will eventually reveal one or more weaknesses within the structure.
Because the requirements of network security and database security overlap so heavily, the following also applies well to network security. The GovernmentSecurity.org Network Security Resources database contains an article titled "Database Security" that lists three items representing a general consensus on making a system secure:

1. Server security — ensuring security relating to the actual data or private HTML files stored on the server
2. User-authentication security — ensuring login security that prevents unauthorized access to information
3. Session security — ensuring that data is not intercepted as it is broadcast over the Internet or Intranet (Rahmel para. 4)
To maintain the contents of the network itself, a network administrator has several options. The first is the archival method, in which the administrator routinely creates backup copies of the entire server content. This method is crude, bulky, time consuming, and cumbersome. The second is an efficient deletion rule. With a well-planned version of this rule, the administrator retains a high degree of control over the level of editing users are allowed on network contents. Because the rule applies to the network as a whole rather than to individual users, it provides a degree of restriction-based security without restricting user access. A third option is to make user access read-only. This option is the most secure, but it prevents users from manipulating data within the network and therefore severely limits the network's functionality.
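A deletion rule of this kind can be sketched as a content-wide policy that is consulted before any modification, regardless of which user requests it. The specific rule below (deletions blocked outright, writes blocked only for archived files) is an illustrative assumption:

```python
# A sketch of the "efficient deletion rule" idea: one network-wide policy
# governs which modifications are permitted, independent of user identity.
# The filenames and the particular rule are hypothetical examples.

ARCHIVED = {"2003-annual-report.doc"}  # content frozen by the administrator

def edit_permitted(filename: str, operation: str) -> bool:
    if operation == "delete":
        return False                       # deletions are always blocked
    if operation == "write":
        return filename not in ARCHIVED    # archived content stays read-only
    return True                            # reads are always allowed
```

Because the rule attaches to the content rather than the user, it restricts dangerous operations without revoking anyone's access, which is exactly the advantage described above.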
The first and easiest option in user-authentication security is to require that users change their login passwords on a regular basis. Although this process is frustrating for individual users, it places very little strain on the network itself and ensures that intruders cannot rely on frequently used passwords. A second option is to deny access once an incorrect password for a specific account has been entered a specified number of times. This option defeats several common hacking tricks, but it is cumbersome in that, without an administrator available, a user whose account has been frozen by this rule cannot regain access to the network in a timely manner.
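The lockout rule can be sketched as a simple failed-attempt counter. The threshold and class names are illustrative assumptions (and a real system would store only password hashes, not plaintext):

```python
# A sketch of account lockout: after MAX_ATTEMPTS consecutive wrong
# passwords, the account freezes until an administrator resets it,
# which is exactly the trade-off noted above.

MAX_ATTEMPTS = 3

class Account:
    def __init__(self, password: str):
        self.password = password  # illustrative only; real systems store hashes
        self.failed = 0
        self.locked = False

    def login(self, attempt: str) -> bool:
        if self.locked:
            return False          # frozen accounts reject even correct passwords
        if attempt == self.password:
            self.failed = 0       # a success clears the counter
            return True
        self.failed += 1
        if self.failed >= MAX_ATTEMPTS:
            self.locked = True
        return False

    def admin_reset(self) -> None:
        self.failed = 0
        self.locked = False
```

Note that once locked, even the correct password is refused; this is what defeats brute-force guessing, and also what strands a legitimate user until the administrator intervenes.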
The first and easiest option with regard to session security is to encrypt the data. This option is relatively simple and painless; its only drawbacks are the cost of purchasing or building the encryption software and the additional time required to transmit the information. A second option is to employ additional software that secures the user's session. An example is a VPN, which sets up a direct, secure connection between the user and the network. Its drawbacks are the same as those of encryption software: added cost and increased lag time. A third option is to personally monitor and police individual session activity. This option is the most precise, but it requires constant personal attention from the network administrator and is therefore not very efficient.
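In current practice, session encryption is usually obtained by wrapping the connection in TLS rather than building a cipher by hand. As a minimal sketch, Python's standard-library ssl module configures such a session; no network connection is actually opened here:

```python
# A sketch of session security via encryption: a TLS context that a client
# would use to wrap its socket. The defaults already enforce the two checks
# that prevent a session from being intercepted or impersonated in transit.
import ssl

context = ssl.create_default_context()

# Certificate verification and hostname checking are on by default,
# so the session is both encrypted and authenticated end to end.
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```

The extra handshake and encryption work is the "lag time" cost the text describes; the software cost has largely disappeared, since TLS libraries now ship with every major platform.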
External Exposure, Risk, and Security Options
The options stated above are the basics of any system security plan and provide the cornerstone on which the anti-intrusion structure is built. Further measures are still needed, however, because this cornerstone will not eliminate all possible intrusions from the Internet. The reason is that a system built on flexibility and ease of access (the enterprise model system) depends heavily on Internet-based tools, including web browsing applications as well as communications software such as email, instant messaging applications, and file transfer programs.
The Internet Security Systems research database contains an article entitled "Risk Exposure Through Instant Messaging And Peer-To-Peer (P2P) Networks". This article highlights instant messaging services and several popular file-sharing networks as primary targets for malware and other malicious intrusions. Furthermore, the impetus for Microsoft Windows XP Service Pack 2 was the discovery of an exploited security hole in the operating system, reached through the ActiveX controls of Microsoft Internet Explorer. Because ActiveX code is an integral part of Microsoft's web browser, the exploit could not be removed by simple application patching. Given its severity, Microsoft created Windows XP Service Pack 2 and used its automated update software to push the changes to its customers' operating systems. The upgrade comes complete with an aggressive, nearly impregnable (and sometimes annoying) firewall, as well as a new security-monitoring center. (Microsoft para. 2-7)
Another software security option with a similarly aggressive firewall and security center is Norton Internet Security 2004. This application is also a prominent choice for filtering out unwanted access by unknown applications or unknown computers. Like Microsoft's Service Pack 2, it requires the user to manually create, or allow the application to configure, access stipulations for every program on the computer that requires access to the Internet. (Symantec)
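The per-program access stipulations both products require can be sketched as an allow-list consulted before any outbound connection. The program names are illustrative, and this is a conceptual model of the rule, not either vendor's implementation:

```python
# A sketch of per-application firewall rules: outbound Internet access is
# granted only to programs the user has explicitly approved, so a newly
# installed (or malicious) program is blocked until someone allows it.

ALLOW_LIST = {"browser.exe", "mailclient.exe"}  # user-approved programs

def outbound_allowed(program: str) -> bool:
    """Unknown programs are denied by default and must be added manually."""
    return program in ALLOW_LIST
```

The manual-approval step is precisely the configuration burden, and the source of the application conflicts, that the next section discusses.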
Security System Conflicts
Choosing either or both of these filtering applications can greatly increase the security of one's system and prevent unwanted or unauthorized access through the Internet. As mentioned, these applications require extensive manual configuration to ensure optimum system productivity. Their cumbersome nature is compounded by the fact that, even after the configuration settings are in place, the filters may still conflict with applications the system needs for productivity. This is especially true if both security applications are installed on the same system.
Conflicts such as these are precisely why a functional balance between access filtration and access permission is needed. It is true that every permission the system grants opens a new exposure risk. It is equally true, however, that if the level of security interferes with the level of productivity, the security becomes more of a hindrance to the system than a benefit. One final way to ensure system security on the Internet is to design the system with such security in mind. If the system inherently manifests its own security features while providing for its own optimized functionality, there is less reason to burden it with additional security software that might restrict the functionality of the system or its supporting applications.
One of the major trends in today's IT world is outsourcing, largely because of the cost-benefit ratio it provides. Most non-domestic outsourcing options supply labor at extremely low prices. The Outsourcing Institute article "Offshore Business Process Outsourcing (BPO): New Doors to Value Creation" had the following to say about the success of offshore outsourcing:
Offshore BPO is one of the most significant business trends of our time. According to a recent Gartner Group study, worldwide BPO services will grow at an annual rate of almost 10 percent, climbing to a $173 billion market by 2007. Many organizations know that for a fraction of the cost, their back office operations (from application or loan processing to account reconciliation or billing services) and contact-center based customer care programs can be implemented both domestically and in countries like India. However, while labor rates are dramatically lower in other countries, those who view offshore BPO from the perspective of purely wage arbitrage may be missing out on the real value. (Outsourcing para. 1)
There are several paths to choose from when outsourcing offshore. The first is application and systems development. This branch of outsourcing allows key system components to be developed cost-effectively off site and integrated on site upon completion. In either case, whether system or application development, the components can be completed in whole or in part by off-site development departments selected by the client organization. The procedure generally entails intensive communication between the outsourced department and the on-site project team, which then reports to project management. (BrickRed para. 2-6)
Outsourced technical support falls into two venues. The first is organizational assistance, which comes into play when the client organization experiences technical difficulties with a product developed by an outsourced department. Normally this entails extensive and haphazard telecommunications correspondence between the client organization and the offshore department. An alternative method, however, is quickly gaining popularity.
Offshore Tech is an outsourcing organization located offshore (relative to North America). The company offers a variety of services, including development and design, Internet-based operations centers, and crisis support. Along with its development service packages, it offers client organizations the opportunity to build a personalized virtual technical support team that remains on call, communicates with the client organization via "a secure correspondence channel", and remains at a fixed cost. (Offshore para. 2)
The second venue of offshore technical support is the outsourced call-center department. This off-site department serves as a low-cost labor pool that, with minimal training, intercepts and attempts to resolve technical issues raised by the client organization's clientele. Several companies have adopted this arrangement to cut costs; those listed in a PC World Magazine review include Microsoft, Dell, Symantec, and Adobe. (PC World para. 7)
Although offshore development and support provide a straightforward and cost-effective solution in the areas they address, several domestic sources, including PC World Magazine, have questioned the performance and effectiveness of outsourced departments. One key issue behind these doubts is the apparently substandard communication skills of outsourced departments when dealing with domestic clientele. In design and development, this communication deficit can greatly reduce the client organization's productivity. In customer support, it can cost the client organization customers: frustrated or panicked customers who want prompt resolution, or at least comprehensible reassurance, may view the ineffectiveness as an irritant or an inconvenience. For these reasons, organizations must take great care in the decision to outsource, as well as in the choices made therein. Any hindrance to the system may result in the loss of system flexibility or user mobility.
An article by Judith Hurwitz in DBMS Online describes the access layer of her Movable Enterprise model as follows: "Access. This is the user interface layer. It is where a user formulates a request for information and receives a reply. Unlike the fat client model, the access layer in the Movable Enterprise model assumes that context (the application-specific logic) is outside the user interface. Therefore, the access layer is responsible only for displaying information, not maintaining it. Within this model, the access layer can be architected to run on whichever platform and interface are appropriate, such as telephones, personal digital assistants, Internet appliances, or interactive televisions." (Hurwitz para. 4)
In essence, mobile computing is the ability to access a computer network from remote locations or through portable devices. Remote-location access can include Internet cafes, public access terminals, ATMs, quick-pay or similar transaction terminals, and other off-site terminals. If a remote location is inaccessible or inconvenient, an individual may instead use a portable device such as a laptop, a cell phone, or a personal digital assistant to access the network. This plethora of access options is the fundamental source of an enterprise model network's flexibility and must be maintained to ensure optimum network productivity.
Several communications companies, such as AT&T, Sprint, SBC, Time Warner, AOL, and many of their competitors, currently offer services that let clients access the companies' communications networks from remote locations and personal devices. Examples include retrieving email, text messaging, and using instant messenger applications from the points of access mentioned above. Several instant messenger providers go so far as to offer Java applet versions of their programs, which can be used through any Internet browser without installing the full application. Though these Java versions are limited in function, supporting neither file transfer nor advanced imagery, they add a new degree of mobility to their parent applications and to the networks those applications access. While these applications do pose a risk to the system, their convenience alone should be factored into the decision to make them available on the network or to restrict them.
One new communications trend that may contribute to the availability of remote locations is the replacement of standard pay phones with public access data terminals. These terminals are frequently advertised in late-night or technical-channel infomercials. Once in place, a data terminal serves as a pay phone, Internet access terminal, directional guide, ATM, and automated loan service terminal. If these public access terminals become more prevalent, gaining access to any given network will become far more convenient.
The enterprise system has many large and varied aspects. It is first and foremost a large decentralized system that will most often come to depend on legacy components and other continuance strategies. The frequent maintenance and revision required to keep it competitively up to date could well demand an entire department devoted solely to that end. Because of its expansive, multi-user nature, great efforts must be invested in keeping the network secure from unwanted intrusion. These efforts should by necessity be extensive, yet planned and executed so that they do not hinder system productivity; otherwise they place an undue burden on the system.
Being large in form, the enterprise system may not be developed solely by the parent organization itself. If that is the case, careful consideration should be given to which outsourcing paths to follow, and to whether the added complication of offshore departments is both cost-effective and time-efficient. The enterprise network lives or dies by the flexibility of its access, so it should possess as many remote and portable-device access options as possible. With all of these considerations in mind, an organization should be able to develop an optimized enterprise model system network that will meet its business needs for years to come. If the organization balances and integrates all of these considerations, the system should remain stable throughout its life cycle.
References

Office of Information Technologies, University of Notre Dame. (2001). Dictionary of IE Terminology: Section E. Retrieved October 21, 2004.

Offshore Technologies. (2002). Virtual Staff. Retrieved October 21, 2004.

Hospitality Industry Technology Integration Standards. (2003). White Paper: Legacy Systems [Electronic version]. Retrieved October 21, 2004.

WEBNOX Corp. (2003). Hyperdictionary: Legacy System Definition. Retrieved October 21, 2004.

BrickRed Offshore Outsourcing Co. (2004). Development Models. Retrieved October 21, 2004, from http://www.brickred.com/outsourcing/models.jsp

Microsoft Corp. (2004). Windows XP Service Pack 2 with Advanced Security Technologies Release Candidate 2 Fact Sheet. Retrieved October 21, 2004.

Symantec Inc. (2004). Customize Firewall Rules. Retrieved October 21, 2004, from Norton Internet Security 2004 Help & Support Database.

Desmond, M. (2004). Misadventures in Tech Support [Electronic version]. PC World Magazine, July 2004. Retrieved October 21, 2004.

Hurwitz, J. (1996). The Moveable Enterprise: A New Architecture to Meet the Changing Needs of the Virtual Corporation. DBMS Online, 9(9). Retrieved October 21, 2004, from http://www.dbmsmag.com/9608d04.html

Klein, T. (2004). Offshore Business Process Outsourcing (BPO): New Doors to Value Creation. Retrieved October 21, 2004, from The Outsourcing Institute online database.

Piccard, P. (2004). Risk Exposure: Instant Messaging and Peer-to-Peer Networks v2.0. Retrieved October 21, 2004, from the Internet Security Systems online database.

Rahmel, D. (1997). Database Security, Part 1 [Electronic version]. Retrieved October 21, 2004, from the GovernmentSecurity.org Network Security Resources online database.

Seacord, R. (2003). Tuning Up Legacy Systems: Continual Enhancements and Modernization Are the Best Defense Against System Obsolescence. Optimize online magazine, 4(22). Retrieved October 21, 2004.

Ulrich, W. M. (2004). Whiteboard: Pulling the Plug on a Legacy System. CIO Insight online magazine, March 2003. Retrieved October 21, 2004.