Today, I will be sharing with you some of our
experiences in developing and deploying DCE applications within the General
Motors environment. ...
The organization I work for within EDS is called Client Server Technology
Services - part of EDS' Engineering Services Division. Although we provide
services to many different EDS accounts, in many different industries,
this morning I will be focusing on the common Infrastructure EDS is deploying
within GM.
Specifically, I will be covering -
the GOALS that EDS and GM are trying to accomplish with the common
Infrastructure,
the PROCESS by which we gather requirements, develop and deploy the
Infrastructure,
the key SERVICES provided by the Infrastructure,
how the Infrastructure is MANAGED,
some of the APPLICATIONS being deployed on the Infrastructure,
and, finally, I will highlight some of the LESSONS LEARNED while implementing
the Infrastructure.
At the end I'll be glad to answer any questions you may have.
By the word "Infrastructure" I mean everything below the application. It
is made up of hardware, operating systems, communication software and
application services. It covers everything from desktop PC's, to Unix
workstations and servers, to Mainframe systems. It provides the basic
communication between resources on the network, such as a PC sending and
receiving data from a Unix server. It includes LAN's, campus, metropolitan
and wide area networks. The Infrastructure also provides a set of
services that any application can take advantage of, such as an RDBMS or
a queuing service.
The Infrastructure, combined with strategic applications, makes up what we
call "GM's Common Systems".
The GM Infrastructure will provide a consistent computing environment across
the Engineering, Plant and Office environments. In the past these were very
different and distinct computing environments, and sharing data between them
was difficult, if not impossible. The common Infrastructure will allow all three
environments to communicate, sharing data and applications. Implicit in this
is that the Infrastructure deployed will support distributed,
client/server applications.
The idea of "Fast to Market" applies to application development and
deployment just as it does to manufacturing. The Infrastructure will
help reduce application development time by reducing the number of
things the application would normally have to be concerned with. It
will provide services that applications can use in a common, standardized way.
Individual applications do not have to reinvent the wheel. For example, the
Infrastructure will provide a naming service that any application can use,
in the same way, to locate a resource. Having a set of common services that
are used in a consistent manner will reduce application
development time and aid in application maintenance.
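To make the naming service concrete, here is a rough sketch, in C, of what
locating a server through the DCE Cell Directory Service looks like. The CDS
entry name and the interface handle are invented for illustration; they are
not from an actual GM application.

    #include <stdio.h>
    #include <dce/rpc.h>

    /* Illustrative only: the interface handle would come from an
     * IDL-generated client stub, and the CDS entry name is invented. */
    extern rpc_if_handle_t demo_v1_0_c_ifspec;

    rpc_binding_handle_t locate_server(void)
    {
        rpc_ns_handle_t      import_ctx;
        rpc_binding_handle_t binding = NULL;
        error_status_t       st;

        /* Ask CDS for servers that exported this interface. */
        rpc_ns_binding_import_begin(rpc_c_ns_syntax_default,
            (unsigned_char_t *)"/.:/subsys/gm/demo_server",
            demo_v1_0_c_ifspec, NULL, &import_ctx, &st);
        if (st != rpc_s_ok)
            return NULL;

        /* Take the first compatible binding CDS hands back. */
        rpc_ns_binding_import_next(import_ctx, &binding, &st);
        rpc_ns_binding_import_done(&import_ctx, &st);

        return (st == rpc_s_ok) ? binding : NULL;
    }

Every application locating its resources through the same few calls is
exactly where the development and maintenance savings come from.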
Another goal of the Infrastructure is to provide a secure environment for
the applications - protecting against unauthorized access to the network and
its resources, as well as providing the applications with an acceptable level
of security. This has become increasingly important to GM because of the need
to share information with companies outside GM (such as suppliers, partners,
etc.)
Obviously, we are not going to convert all of GM's current applications
to take advantage of the Infrastructure immediately. New applications
will be developed using the common Infrastructure, and existing legacy
applications will be migrated as required. There may also be some applications
that are never re-engineered to take advantage of services provided by the
Infrastructure. So, there will be a mix of applications coexisting on the
network - some part of the Infrastructure, some not.
Of GM's hundreds of current applications, 60 have been identified as
"Common Systems" applications; 12 have been converted, or are under development,
to take advantage of the Infrastructure; and, of the 12, 6 are actually DCE
applications. Later, we will look at two of the DCE applications in detail.
To protect the Infrastructure investment, industry trends
and standards play a key role in determining what technologies will
be used to implement the Infrastructure. The Infrastructure will evolve
over time, but it should not become obsolete and have to be wiped out
and replaced. By adhering to certain industry trends and standards we
are trying to reduce some of the risk involved with moving to the new
infrastructure.
We realized that this Infrastructure would not be stagnant - that as soon
as something was deployed on it, people would be talking about the next
upgrade. We would have to have a procedure in place for making
sure new applications, and Infrastructure and application upgrades, could
be deployed in an orderly fashion.
It was clear from the beginning that for GM to move into a
distributed client/server environment, a process that would encompass
both application and Infrastructure development would have to be in place.
The list of disastrous, unproductive and costly mistakes would be very long if
we did not have a comprehensive plan for developing, deploying and managing
the Infrastructure and its applications. What if there were no overall DCE
cell design strategy, with each application deploying its own cells? Every
time a user wanted to run a different application they would have to bring up
their machine in a different cell and, potentially, log in with different IDs
and passwords.
It was also clear that a group of people would have to be responsible,
on an ongoing basis, for overseeing the Infrastructure - deciding what
technical direction it should take, gathering new Infrastructure requirements,
making sure different levels of software worked together (such as DCE 1.0.2
and 1.0.3), etc.
The Infrastructure requirements originate with the application
teams communicating with the central group responsible for planning
Infrastructure and application development activity. This central group
looks at requirements that are common to multiple applications as
candidates for incorporation into the Infrastructure. The group also
considers existing standards, industry direction, product
availability and miscellaneous business considerations (i.e., politics).
Requirements that the central planning group determines should be
incorporated into the Infrastructure go through a design, test, and
certification phase, while the application team is developing the
application. These two activities normally are completed at about the
same time. The application is then tested on the Infrastructure, piloted,
and released. Production support and management of the application
begins when it is actually deployed. I will be covering the production
management of the Infrastructure and its applications later in the
presentation.
The normal plan is to have a production release of new or upgraded
Infrastructure and application components twice a year, typically,
April and October. This predefined combination of new and/or updated
components is referred to as a Block Point. And, April and October
are Block Point Release dates.
The time from when planning begins to when the Block Point Release
actually occurs is currently about 18 months. This will be reduced as
more Infrastructure components are deployed. So, we have multiple groups
of requirements, at different stages, going through the process at the same
time.
Obviously, this is an oversimplification of the process, but, generally
speaking, this is how it works.
DCE was chosen to be the "glue" for the Infrastructure because it provided
a way to bridge the three environments, allowing the plant, engineering
and office environments to share resources. It also provided an integrated
and comprehensive set of services and tools that could be used in a common
way by application teams in developing distributed, client/server
applications.
DCE also provides a pervasive security model capable of meeting many of
GM's security requirements, with the possibility of a single log-on -
compared to how it is now, with each application having its own log-on
and each applying security its own way.
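As a sketch of what the single log-on looks like to a program, here is the
basic DCE login sequence in C, with error handling trimmed. The principal
name and password would come from the user; nothing here is specific to GM.

    #include <dce/sec_login.h>

    /* Sketch of the DCE single log-on sequence; error handling trimmed. */
    int dce_logon(char *principal, char *password)
    {
        sec_login_handle_t   login_ctx;
        sec_passwd_rec_t     pw;
        sec_login_auth_src_t auth_src;
        boolean32            reset_pw;
        error_status_t       st;

        if (!sec_login_setup_identity((unsigned_char_p_t)principal,
                sec_login_no_flags, &login_ctx, &st))
            return -1;

        pw.version_number = sec_passwd_c_version_none;
        pw.pepper = NULL;
        pw.key.key_type = sec_passwd_plain;
        pw.key.tagged_union.plain = (idl_char *)password;

        if (!sec_login_validate_identity(login_ctx, &pw,
                &reset_pw, &auth_src, &st))
            return -1;

        /* Make this the default identity for every subsequent
         * authenticated RPC the process makes. */
        sec_login_set_context(login_ctx, &st);
        return (st == error_status_ok) ? 0 : -1;
    }

Once the context is set, every application the user runs can make
authenticated RPCs under that one identity - that is the single log-on.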
Obviously, for any Infrastructure component to stand the test of time in a
company the size of GM, it must be flexible and scalable. GM has thousands
of users worldwide, a wide range of hardware suppliers, and applications
running on everything from PC's to Mainframes. The ability of DCE to work in
a heterogeneous environment was critical to GM. DCE provides the flexibility
and scalability to address GM's global computing needs. It is based on a set of
specifications from OSF that has broad vendor support. This broad vendor
commitment to DCE will help reduce the risk of building distributed
applications on obsolete or proprietary technology.
EDS has been involved with OSF since 1988. Initially most of our involvement
was with Motif. However, since 1992 we have been heavily involved with DCE.
We have participated in the End User Steering Committee, SIG's, and Challenge
Events (I-Fest) since 1992.
The EDS Technology Policy Guide, which provides guidelines on the use
of specific technologies, states that "DCE appears to be the software of
choice that will enable distributed computing".
Internally, we have many DCE labs, and are moving toward implementing
DCE on our own (EDS's) production Infrastructure. We have been working
with IBM on MVS/DCE products since 1992 and are evolving towards
an "Open Mainframe" environment - one that supports a Posix interface,
TCP/IP and DCE. One of the applications I will be talking about later
uses DCE on the Mainframe.
Some of the customers we have done DCE work for include: ...
Many of the services provided by the Infrastructure come directly from DCE.
The RPC mechanism, threads, security, directory and time services are all
provided by DCE. File system access is currently based on NFS, however, we
are starting to look at DFS. For Mainframe data access, Encina, MDI and,
in some cases, EDA/SQL are being used. Queuing is provided by Encina, which
also provides load balancing and transaction integrity. Database
functionality is provided by Ingres, Oracle and DB2. Remote cell look-up
services are provided by X.500.
In some cases the Infrastructure team has provided API's on top of DCE
services to ensure that they are used in a standard way. This has been done
for DCE security and CDS and X.500 directory services.
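To give you a feel for the idea, here is a hypothetical header for such a
wrapper. None of these names are the actual GM APIs - they are invented for
illustration; the point is that applications see one standardized interface
whether a lookup is answered by CDS or by X.500.

    #include <dce/rpc.h>

    /* gmdir.h -- HYPOTHETICAL wrapper, for illustration only. */

    typedef enum { GMDIR_OK, GMDIR_NOT_FOUND, GMDIR_ERROR } gmdir_status_t;

    /* Resolve a logical service name to an RPC binding, hiding whether
     * the lookup went through CDS (local cell) or X.500 (remote cell). */
    gmdir_status_t gmdir_lookup_service(const char *logical_name,
                                        rpc_binding_handle_t *binding);

    /* Read one attribute of a configuration object (a site, a user,
     * a file server), wherever it happens to be stored. */
    gmdir_status_t gmdir_read_attribute(const char *object_name,
                                        const char *attribute,
                                        char *value, unsigned value_len);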
Within the new common Infrastructure, a given service can be
provided by more than one mechanism or by more than one
vendor. For example, both DCE and Encina provide load balancing. It is
up to the application to determine which mechanism to use for a given service.
The distributed client/server environment is not necessarily a "cheaper"
environment than a mainframe environment. Sure, the cost of the hardware
is cheaper, but there are many hidden costs in the client/server environment,
particularly in the area of system administration and management.
What we have done in GM (and for many other customers, for that matter)
is look at how to
improve the business process of managing distributed systems. The idea
that each LAN must have its own administrator, each doing administration
slightly differently, can result in a lack of uniform hardware, software and
configurations, making deploying applications a very complicated,
time-consuming endeavor. What we are doing is centralizing administration
into regional centers called Distributed System Management Centers, or DSMCs.
These centers effectively automate LAN operations, improving productivity
throughout the organization. Activities that must be done locally are performed
by on-site personnel. Activities that can be supported remotely are performed
by the DSMC.
Help desk, LAN management, server and desktop management and other functions
are consolidated and applied uniformly throughout GM. The result is much
greater consistency and fewer mistakes. We currently have four DSMCs - in
Toronto; Hauppauge, New York; Plano, Texas; and Troy, Michigan - serving
GM and other customers.
DSMC Services include:
- Network Monitoring
- System Monitoring
- Backup and Restore
- Fault Management
- Inventory Management
- Resource Administration
- Security Management
- Software Distribution and Installation
- DCE Administration
- Database Administration
Re-engineering existing applications to take advantage of
client/server technology - for example, putting a GUI on a legacy
application - without looking at how the technology can be used to improve
the business process would be missing out on one of the main benefits of
client/server technology.
The Data Translation Facility (DTF) was the first application deployed on
the Infrastructure, released in the 1994 October Block Point Release.
The purpose of the application is to automate the translation and transmission
of geometric, text and illustrated drawing files between CAD systems.
DTF is currently deployed in five DCE cells, four in the U.S. and one in Europe.
In addition to providing remote cell look-up services, X.500 also provides
DTF with a means to determine configuration information about all DTF sites.
Objects for DTF users, sites, file servers and databases are all stored in
X.500. The DTF application reads this information to provide the users with
choices about where to send data and which CAD format to use. The information
stored in X.500 is fairly static and is administered by DSMC personnel using
a tool created by the Infrastructure team.
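Using the hypothetical wrapper interface I sketched earlier, the DTF look-up
might read something like this. The object and attribute names here are
invented, not DTF's actual X.500 schema.

    #include "gmdir.h"   /* the hypothetical wrapper sketched earlier */

    char server[256], formats[256];

    /* Ask X.500 where a DTF site's file server lives and which CAD
     * formats it accepts, before offering the user any choices. */
    if (gmdir_read_attribute("/C=US/O=GM/OU=DTF/CN=Site-A",
            "fileServer", server, sizeof server) == GMDIR_OK &&
        gmdir_read_attribute("/C=US/O=GM/OU=DTF/CN=Site-A",
            "cadFormats", formats, sizeof formats) == GMDIR_OK) {
        /* ... populate the user's destination and format choices ... */
    }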
DTF's requirement for secure Mainframe access is really what started us looking
at DCE on the Mainframe. As I mentioned earlier, we have been very involved
with IBM since 1992 in making this a reality, and we were the first to deploy a
production application in this environment.
We are moving toward a Mainframe environment that is POSIX compliant and
supports TCP/IP and DCE. We are currently identifying the operational,
management and support issues regarding an open environment on the Mainframe
and plan to start evaluating MVS 5.1 with OpenEdition and DCE in March of
this year with a May production date.
The Vehicle Order Management (VOM) application will assist GM in ordering
and tracking vehicles. There are two groups of users who will be using the
application: Dealers and Headquarters personnel. Both user environments have
PC client applications; however, only the HQ PC's have DCE. Eventually, GM
plans to extend DCE all the way to the dealership; however, there are many
issues with doing this and it will be a long time before it happens.
The first production release of the VOM application is scheduled for the
1995 April Block Point Release.
The network, the HP servers and Mainframe, Encina, DCE and DBMS's are all
part of the GM Infrastructure and, as such, are managed by the DSMC and are
available to be shared by multiple applications.
Key Infrastructure services used are DCE RPCs, DCE Directory and Security
services, the Oracle and DB2 Databases, Encina transactional RPCs, the PPC
Gateway and Encina Queuing.
Administration is difficult. There are few DCE tools; in most cases
we have had to write our own. DCE Administrators must have strong Unix
Administration backgrounds, attend DCE Administration classes and be given
time to develop their skills.
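To show the sort of tool I mean, here is a sketch of a small homegrown
utility that lists every principal in the cell registry, using the standard
sec_rgy calls. Error handling is trimmed, and I am assuming the empty site
name binds to the local cell's default registry.

    #include <stdio.h>
    #include <dce/binding.h>
    #include <dce/pgo.h>

    /* Sketch of a homegrown admin tool: list every principal
     * in the cell registry. Error handling trimmed. */
    int main(void)
    {
        sec_rgy_handle_t   rgy;
        sec_rgy_cursor_t   cursor;
        sec_rgy_pgo_item_t item;
        sec_rgy_name_t     name;
        error_status_t     st;

        /* Empty site name: bind to the cell's default registry. */
        sec_rgy_site_open((unsigned_char_p_t)"", &rgy, &st);
        if (st != error_status_ok)
            return 1;

        sec_rgy_cursor_reset(&cursor);
        for (;;) {
            sec_rgy_pgo_get_next(rgy, sec_rgy_domain_person,
                (unsigned_char_p_t)"", &cursor, &item, name, &st);
            if (st != error_status_ok)   /* no more entries */
                break;
            printf("%s\n", (char *)name);
        }

        sec_rgy_site_close(rgy, &st);
        return 0;
    }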
Establishing policies regarding name space and security usage is critical.
These must be published and made clear to both administrators and developers.
The Infrastructure team needs to provide documentation and guidelines on the
use of Infrastructure services.
Application teams can initially feel that they don't have as much control
as before. This is primarily a matter of education. Initially, they may be
concerned that a particular computer is not dedicated to their application,
or that a gateway or a database product is also serving users of other
applications. They may want to do their own administration, create their own
name space and security policies, etc. Generally, these concerns are coming
from teams that have little or no DCE and/or Unix experience. Once they
start getting into DCE, realizing what's involved with administration,
they see that the Infrastructure is providing a valuable service, allowing
them to focus on specific application functionality.
In GM, we have created something called the Application Launch Center which is
where application teams can find out what Infrastructure services are currently
available. It is imperative that the people responsible for the Infrastructure
partner with the application teams: requirements for the Infrastructure come from
the application teams, and those teams must understand and participate in the
process for it to work.
The learning curve is steep for application teams, many are learning Unix and
DCE together for the first time. And, even for experienced developers,
debugging DCE applications can be difficult. The sooner pilot programs, along
with training, begin, the better.
Vendor product release dates slip. And you can't assume that everything
interoperates or works the way it should; you may find yourself engineering
"temporary" solutions until the next release of a product is available.
All these issues combine to make delivery dates very suspect. However, none
of them are insurmountable. The first application we deployed (DTF) was a
guinea pig, and we are continually learning and improving the process. The
bottom line is that we are having success developing, deploying and managing
DCE and Encina applications within GM - and that Unix, DCE and Encina are
mature products capable of supporting distributed, mission-critical
client/server applications.
Any questions?
***********************************************************************
CTC Bank Demo Business Logic
The Bank Demo Application is a simple banking application
which allows the user to obtain the balance of an account
(inquire), deposit or withdraw money from an account, and
transfer money between checking and savings accounts.
The purpose of this application is to demonstrate mission critical
characteristics in a distributed application. The business logic
incorporated into the application is intended to demonstrate the
following characteristics:
- Transaction integrity
- Transaction queuing
- Fault tolerance
- Secure access levels
- Location and platform independence
- Transparent Client/Server communications
The application has a three tier architecture consisting of
a client, an application server and a database.
There are three versions of the client application, one for
bank Tellers, one for bank Managers and one that would be used
by bank Customers (an ATM). While the client application is a single
executable, the characteristics (Teller, Manager, or ATM) are determined
at runtime depending on which group the user running the client
belongs to. ATM and Teller users have withdrawal limits that cannot
be exceeded. The client application also performs simple edits such
as verifying that numeric data is entered as numeric.
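The runtime check is just a registry group look-up. Here is a rough sketch
in C; sec_rgy_pgo_is_member is the real DCE call, but the group names are
mine and not necessarily the demo's.

    #include <dce/binding.h>
    #include <dce/pgo.h>

    typedef enum { ROLE_ATM, ROLE_TELLER, ROLE_MANAGER } bank_role_t;

    /* Pick the client personality at runtime from DCE group
     * membership. Group names are illustrative. */
    bank_role_t client_role(sec_rgy_handle_t rgy, unsigned_char_p_t user)
    {
        error_status_t st;

        if (sec_rgy_pgo_is_member(rgy, sec_rgy_domain_group,
                (unsigned_char_p_t)"bank_manager", user, &st))
            return ROLE_MANAGER;
        if (sec_rgy_pgo_is_member(rgy, sec_rgy_domain_group,
                (unsigned_char_p_t)"bank_teller", user, &st))
            return ROLE_TELLER;
        return ROLE_ATM;   /* everyone else gets the ATM interface */
    }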
Deposits, withdrawals and transfers are written to the database in
real time. However, deposits from the ATM client are written to a queue,
and the database is only updated after a Manager has verified that
the amount entered by the ATM user matches the amount actually
deposited. From the Manager client application, the Manager can choose
to accept or reject the deposit. By accepting the deposit, the database
is updated. When a deposit is rejected, the database is not updated.
In either case the deposit is removed from the queue.
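The Manager's verification step follows the flow below. Every name in this
sketch is a hypothetical stand-in for the demo's queuing and database calls.

    typedef struct { long acct; double amount; } deposit_t;

    extern int  dequeue_deposit(deposit_t *d);   /* hypothetical */
    extern int  manager_accepts(deposit_t *d);   /* hypothetical */
    extern void apply_deposit(deposit_t *d);     /* hypothetical */

    /* Review the next queued ATM deposit. Accepted or rejected,
     * the deposit has already left the queue. */
    void review_atm_deposit(void)
    {
        deposit_t d;

        if (!dequeue_deposit(&d))    /* queue empty */
            return;
        if (manager_accepts(&d))
            apply_deposit(&d);       /* update the database */
    }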
The application verifies that the account from which the user is
withdrawing, depositing or transferring money exists and that the
remaining balance will be greater than zero.
The application uses two databases from two different database vendors:
Checking accounts are kept in an Oracle database, while
Savings accounts are kept in Ingres. Each database has its own
application server.
If the application server(s) are not available, transactions
are written to a queue and processed later by the application server(s)
when the database becomes available. If an invalid transaction is written to
the queue (e.g., a withdrawal from an account that does not exist), the
transaction is written to a log file.
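In rough terms, the client-side fallback looks like this. The withdraw()
operation stands for the IDL-declared RPC and queue_transaction() is a
hypothetical stand-in for the queuing call; the DCE exception macros,
though, are the real mechanism for catching a communications failure.

    #include <dce/rpc.h>
    #include <dce/rpcexc.h>   /* TRY/CATCH/ENDTRY and RPC exceptions */

    extern void withdraw(rpc_binding_handle_t h, long acct, double amt);
    extern void queue_transaction(char *op, long acct, double amt);

    /* Try the withdrawal RPC; if the application server is
     * unreachable, queue the transaction for later processing. */
    void do_withdrawal(rpc_binding_handle_t server, long acct, double amt)
    {
        TRY
            withdraw(server, acct, amt);
        CATCH(rpc_x_comm_failure)
            queue_transaction("withdraw", acct, amt);
        ENDTRY
    }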
When the user transfers money between a checking and savings account,
a two-phase commit is done between the two different databases across
the two different database products. A switch can be set allowing us
to stop the application servers after the first part (the first SQL
statement) of the transaction has been executed. This causes the
transaction to fail and get rolled back, writing the transaction to the
queue. When the application servers are restarted, the queue is processed
and the transfer is performed.
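In Encina's Transactional-C (an extension of C), the transfer comes out
roughly like the sketch below. debit_checking() and credit_savings() are
hypothetical transactional RPCs, one per application server; Encina drives
the two-phase commit across Oracle and Ingres.

    #include <stdio.h>

    extern void debit_checking(long acct, double amt);  /* hypothetical */
    extern void credit_savings(long acct, double amt);  /* hypothetical */
    extern void queue_transfer(long from, long to, double amt);

    void transfer(long from_acct, long to_acct, double amount)
    {
        transaction {
            debit_checking(from_acct, amount);   /* SQL against Oracle */
            credit_savings(to_acct, amount);     /* SQL against Ingres */
        } onCommit {
            printf("transfer committed\n");
        } onAbort {
            /* Either half failed (or the test switch stopped the
             * servers mid-transaction): both updates roll back and
             * the transfer goes to the queue for later processing. */
            queue_transfer(from_acct, to_acct, amount);
        }
    }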
===================================================================
Paul Frisch, Systems Engineer
EDS, 800 Tower Dr., Troy, MI 48098
Email: pfrisc01@eng.eds.com
===================================================================