Realtime CORBA

Distributed Object Technology

Draft 15 July 1996

Russ Johnston, Vic Wolfe & Mark Steele

 

 

 

 

INTRODUCTION

 

The DoD posture is changing dynamically in the nineties and will continue to change well into the 21st century. To meet these challenges, the systems of the future must be adaptable, evolvable, and interoperable, meeting agreed-upon standards that are being set in the commercial marketplace. The most important drivers of this change are globalization, specialization, decentralization, and acceleration. Globalization refers to the need to support regional conflicts, disaster relief operations, combinations of both, and multi-region conflicts and relief efforts. A distributed collaborative planning process is used in these types of operations. A regional conflict involves crisis assessment, course-of-action development, campaign planning, battle planning, and force coordination (tri-service/joint forces). Disaster relief involves crisis assessment, relief planning, military coordination, and plan execution. Combined and multi-regional operations add further complexity for the information systems that support distributed collaborative planning.

Specialization is the need for the military and DoD to do what they do best and to partner with others to provide the remaining support. Decentralization is the need to make decisions closer to the source of the problem and to use collaborative planning that draws on resources, including people, in a global manner; this avoids the tendency to create decision bottlenecks at the top. Acceleration draws on the first three trends: when so many changes are happening at once, there is an overriding sense that change is accelerating, perhaps beyond the limits of our information systems and our ability to respond adequately. The interoperability of military and DoD systems is currently being addressed but is far from solved; it is also being addressed at the civilian, joint military, and international levels. The acceleration factor taxes our ability to provide information systems that are evolvable. Distributed object computing and distributed systems provide a paradigm for information systems to meet the challenges of the 21st century.

 

 

BACKGROUND

 

Distributed technology is undergoing a dramatic shift in the client/server paradigm. The transition is from the Ethernet client/server to the global client/server. The shift is fourfold: single-server LANs, multiple-server WANs, objects (current), and software agents (future). The enablers driving the paradigm shift are high-speed wide-area bandwidth, multi-threaded operating systems, high-performance protocols, large memories, and high-performance platforms. In addition, the maturing of distributed environments and distributed computing techniques over the last ten years has required software components that are portable, adaptable, survivable, and evolvable enough to keep up with a dynamic system development process. Earlier environments were built from components that supported some aspects of object orientation but did not take advantage of the full paradigm. Their architectures supported C and Common Lisp language mappings, Pascal-like type definition, single inheritance, and a unique identifier for object identification. Over the last few years, dynamic and highly portable applications have been designed in C++. In addition, integration and migration of legacy software modules, libraries, and utilities was a problem because these elements are integral to their infrastructure and could not take full advantage of object-oriented features such as inheritance, abstraction, polymorphism, and code reuse. This created a mismatch between the environment and legacy software modules on one hand and the emerging C++ applications on the other. Software development takes too long, costs too much, is poorly documented, and is too complicated to respond to evolving mission changes and requirements. Maintaining legacy systems consumes 70%-85% of development resources.

 

 

Heterogeneous Distributed Computing.

Diversity in hardware and software is a fact of life, and our networked computing environment is becoming more diverse, not less, as computers evolve. Let us focus on the problems that this environment creates:

 

Component software is making this possible. Applications are changing; commands are less willing to accept huge, monolithic, do-everything applications and are looking for smaller components that they can combine flexibly and dynamically to create tools focused on their particular needs. And components will work together only if they have been designed and built on standard interfaces, in the spirit of the DII-COE.

 

Object Technology.

What we need is a new way of looking at the entire problem -- from problem statement and analysis through solution design and implementation to deployment, use, maintenance, and extension -- which integrates every component and takes us in orderly fashion from each step to the next. Object technology is this new way, and we'll begin by looking at the concept of object technology and why it has become the new paradigm of computing. But first, a look at how we got here. From the beginning of computing in the 1950s until desktop computing became well established in the late '80s, computing was such a costly endeavor that programs and systems focused on conserving resources like memory, persistent storage, and input/output. This led to batch-oriented systems that updated periodically instead of continuously and produced reports perhaps weekly or monthly. In many cases, these reports were out of date before they were printed.

 

That is, computing focused on data instead of the uses the data might have, and on computing procedures instead of warrior processes. Computers, not people, were in the driver's seat, and the people who used computers had to translate data into the program's structure on input and translate back from the program's structure to their needs when they received the output.

 

Modeling the Real World.

Hardware has changed a lot since the early days; in fact, the rapid pace of change sometimes seems like a problem in itself. But there is no denying that abundant computing power, memory, storage, and data communications give us an environment where we can mold computing to our needs instead of adapting to it. In a nutshell, object technology means computing that models the real world. An object in a computer program corresponds to a real object, both to the programmer who implements it and to the user who creates, manipulates, and uses it. And computer objects work together to model the interactions of real-world objects.

 

Dealing with Complexity.

Complexity is the word that best describes the problems we're facing today. The simple computing problems have been solved during the decades already past, and the countries that cannot handle the complex problems are going to fall by the wayside during the decades ahead. Fortunately, the tools to work these complex problems are available now.

 

Generations of problem-solving experience have shown that the best way of dealing with complexity is to split a huge problem into a number of smaller, more manageable parts. But we have to be careful how we do this, because the wrong split will make our problem more complex instead of simpler. For instance, some approaches split data from functionality; if we started with basically a data problem or a functionality problem, this simplifies our task by letting us focus on the core without distraction from the rest. But if we actually started with an integration problem, this division leaves us holding half the clues and half the tools, and still trying to solve the whole thing.

 

By splitting our computing problem into components that model the real world, we build on things we are all familiar with. Components will be easier for the warrior, analyst, programmer, and user to grasp, and interactions between objects will appear logical and well founded.

 

But object-orientation does more than give us a real-world model. It also helps solve the three major problems of programming.

 

Object Oriented Software Integration.

The next three areas sum up our object-oriented solution to the software integration problem: first, we'll define what a CORBA object is; second, we'll get a first look at the basic architecture (that is, CORBA), which allows all of the objects on our system to communicate; and third, we'll start examining the higher-level architecture for the objects themselves, which allows applications to communicate -- OMG's Object Management Architecture, or OMA.

 

Without a doubt, our ultimate goal is application integration. But in order to get there, OMG had to solve the interoperability problem first, to create a foundation for the work of integration. The interoperability problem is solved: OMG's solution is CORBA. The integration solution is a work in progress: the base layer is CORBAservices. The higher-level CORBAfacilities, which will yield the biggest payoff to the developer and end user, are just starting to emerge.

 

Objects.

Objects are discrete software components -- they contain data, and they can manipulate it. Usually they model real-world objects, although sometimes it is useful to create objects specifically for things we want to compute. Other software components send messages to objects with requests; the objects send other messages back with their responses.

 

In an enterprise, many objects model real-world entities. For example, a ship's battery control program might contain a number of magazine objects. Each of these would know how many shells it contains (because it had received this information in a message and stored it), what kinds of shells they are, and its storage capacity. In response to requests, the magazine could report what it contains -- the type of shells and the amount. In a modeling calculation, the magazine object could "deliver" shells to a loader by sending it a message, stopping the flow when it becomes empty, just like a real magazine.

 

Usually objects in a computer are created from a template. For our magazine object, the template knows all of the properties that a magazine might have: a size, location, content type, content amount, connections, and so forth. When we use the template to create a magazine object "instance," we place values into the placeholders the template provides in order to make this object represent a particular magazine (in object-speak, this template is termed a class).

 

The template also knows all the things a magazine can do -- that is, the messages that we can send it, and the ones that it can send to other objects. Each particular magazine object instance that we create has the same set of capabilities.

 

From the template, we can create as many magazine objects as we need or want. Each one has the same set of qualities that it can store, the same set of messages that it will respond to, and the same set of functions that it can perform. What if this template is almost, but not quite, what we need? We can derive a new template from this one using inheritance, as in the sketch below.
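To make the template/instance idea concrete, here is a minimal C++ sketch of a hypothetical Magazine class. The class name, members, and the derived class are invented for illustration and are not taken from any CORBA specification.

```cpp
#include <algorithm>
#include <iostream>
#include <string>

// Hypothetical template (class) for magazine objects; every instance fills
// in these placeholders with its own values.
class Magazine {
public:
    Magazine(std::string shellType, int capacity)
        : shellType_(std::move(shellType)), capacity_(capacity), count_(0) {}

    // Messages the object responds to.
    void store(int shells) { count_ = std::min(capacity_, count_ + shells); }

    // "Deliver" shells to a loader, stopping the flow when empty.
    int deliver(int requested) {
        int given = std::min(requested, count_);
        count_ -= given;
        return given;
    }

    std::string contents() const {
        return std::to_string(count_) + " " + shellType_ + " shells";
    }

protected:
    std::string shellType_;
    int capacity_;
    int count_;
};

// A new template derived from the old one using inheritance.
class MonitoredMagazine : public Magazine {
public:
    using Magazine::Magazine;
    bool nearlyEmpty() const { return count_ < capacity_ / 10; }  // extra capability
};

int main() {
    Magazine forward("5-inch", 600);          // one instance created from the template
    forward.store(350);
    std::cout << forward.contents() << "\n";  // prints "350 5-inch shells"
    forward.deliver(100);
    std::cout << forward.contents() << "\n";  // prints "250 5-inch shells"
}
```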

 

Flexibility is built in, since one of the things an object can do is send messages to other objects. We would expect, for instance, that our magazine object would talk to loader or damage control objects, and possibly to sensors that monitor temperature or humidity. Since each magazine can connect to its own set of other objects, a small set of templates representing these different object types will let us model a battery of any size and complexity.

 

A spreadsheet cell is another good example of an object. It contains data: probably a numerical value, or perhaps some text or a formula. It can perform operations: it can manipulate the data by displaying them on the screen or a printed page, perhaps in a particular font, size, and format, or by performing calculations; and it can deliver its contents on request (perhaps to a linked document). Like many (but not all) objects, there can be many cells, all identical except for contents and address, and each created from the same template. The spreadsheet program with its loaded spreadsheet data -- a collection of objects -- is an object itself. The additional operations it can perform include displaying, printing, or formatting ranges of cells, or the entire spreadsheet.

 

We do not consider passive collections of data to be objects. That means that our spreadsheet data file, or document data file, are not objects when they are stored on disk. Neither is a CD-ROM disk, nor the files on it. But the spreadsheet data file and document data file become intrinsic parts of an object when they are loaded into the programs that manipulate them, and the CD-ROM disk may become the data part of an object when it is loaded into a drive. Why this distinction? Because to us, an object is something that can take part in our component software environment, and an object has to be more than just data.

 

Object Interoperability.

In order for objects to plug and play together in a useful way, clients have to know exactly what they can expect from every object they might call upon for a service. In CORBA, the services that an object provides are expressed in a contract that serves as the interface between it and the rest of our system. This contract serves two distinct purposes:

 

Each object needs one more thing: a unique handle that a client can pass to the infrastructure to route a message to it. We’re deliberately not calling it an address -- objects keep the same handle when they move from one location to another. Think of the handle as a kind of address with automatic forwarding.

 

Now we have a complete conceptual picture of our networked computing environment: Each node is an object with a well-defined interface, identified by a unique handle. Messages pass between a sending object and a target object; the target object is identified by its handle, and the message format is defined in an interface known to the system. This information enables the communications infrastructure to take care of the details.

 

These simple yet powerful concepts provide the fundamentals for CORBA's component software revolution. The interfaces are expressed in OMG Interface Definition Language (OMG IDL), making them accessible to objects written in virtually any programming language, and the cross-platform communications architecture is the Common Object Request Broker Architecture (CORBA).
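As a rough, hedged illustration of what such an interface contract looks like to a programmer, the sketch below shows a hand-written C++ abstract class in the spirit of what an IDL compiler's C++ mapping produces. The interface name and operations are invented for this example and are not taken from any OMG specification.

```cpp
#include <string>

// Illustrative only: a hand-written abstract class standing in for the code an
// IDL compiler's C++ mapping would generate for a simple interface. A real
// mapping also generates proxy/stub classes, exceptions, and ORB plumbing,
// which are omitted here.
class MagazineInterface {
public:
    virtual ~MagazineInterface() = default;

    // Operations a client may invoke; the caller does not see whether the
    // implementation object is local or on another node -- routing by handle
    // is the infrastructure's job.
    virtual std::string contents() const = 0;
    virtual long deliver(long requested) = 0;
};
```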

 

 

DEFINITION OF TECHNOLOGY

 

Distributed objects are self-reliant, intelligent entities that can interoperate across heterogeneous languages, operating systems, databases, database management systems, schedulers, resource management systems, applications, tools, networks, protocols, and hardware. Distributed objects need a home; the infrastructure is that home, providing the software mechanisms to support the global computing of the future. There are currently two standards with different paradigms for implementing this infrastructure, each designed from the beginning to work with objects in a dynamic environment: CORBA/OpenDoc from the Object Management Group and CI Labs, and COM/OLE from Microsoft. Both of these standards are evolving and will support the creation of intelligent component infrastructures that can navigate high-performance networks and collaborate with other components across the enterprise. A component is an object or set of objects that has been packaged for use on the network; it could loosely be referred to as hardened software that accomplishes a specific task in the network community. The strength of the Common Object Request Broker Architecture (CORBA) specification is its technical edge, supported by the fact that it is based on the classical distributed-environment development that has been evolving for the past ten-plus years; its weakness is performance. The strength of COM/OLE is Microsoft and its software-developer clout; its weaknesses are legacy application programming interfaces and possibly performance. There are joint meetings between the Object Management Group and Microsoft to develop a path for interoperability between the two emerging standards.

 

CORBA is highly portable across heterogeneous platforms, while COM/OLE is not currently available in a distributed release. The issue of performance is important. Emerging technologies do not always meet expectations of matching current performance or improving it by an order of magnitude. The power provided by distributed objects and the supporting infrastructure will improve in time, and initially the capability, adaptability, and portability will outweigh performance in importance. A benchmark the NRaD Distributed Object Technology Team uses is this: if the evolving technology has merit, can perform at seventy-five percent of the current implementation/system specification, and has an identifiable path for improvement, then the risk of technology development and implementation is worth taking.

CORBA enhanced with OpenDoc provides shared class libraries and an interface definition language that defines platform-independent interfaces. Object Request Brokers (ORBs) provide a symmetrical relationship in which an object can be a client or a server depending on the requirements and expectations of the object/component. The CORBA intelligent infrastructure provides a level of abstraction that allows functionality and applications to be designed, developed, and packaged in parallel. Independent software developers can now focus on developing new functionality without having to reinvent the entire infrastructure. The standard interface and level of abstraction provided by the infrastructure offer a logical, repeatable, proven interface instead of a collection of application programming interfaces where each entity has a custom interface. The testing phase will not be unbounded, because the infrastructure will be a known quantity and does not need to be reinvented for each system or for the evolvable extensions that systems undergo. The single CORBA intelligent infrastructure will support components that run in the same process and components that run across the global network. CORBA provides a seamless environment for accessing resources globally, and both stand-alone and global applications can be supported with an intelligent infrastructure that is scalable.

Software agents will be deployed on the network and infrastructure to perform endless tasks such as statistics gathering, linkage of data, workflow, and priority service. These software agents can be viewed as the next generation of subroutines, each with a very specific task to perform over and over; the agents are global instead of local. The CORBA 2.0 specification, released in late 1995, provides for interoperability that will enable ORBs from various implementers/vendors to communicate and exchange objects. In addition, repository technology can be implemented in a heterogeneous environment. The main components of CORBA are the Object Bus, Object Services (system frameworks), Common Facilities (application frameworks), Business Objects, and interoperability and collaboration support. The Object Management Group philosophy is for products to be available 18 months after the adoption of an enhanced specification.

 

 

The OMA: Applications Level Integration.

CORBA is a great accomplishment, but it connects only objects, not applications. Enterprise integration requires a lot more than this, and OMG provides it in the Object Management Architecture, or OMA. Of course, it's based on CORBA, but this is just a starting point.

 

The OMA embodies OMG's vision for the component software environment. One of the organization's earliest products, this architecture shows how standardization of component interfaces will penetrate up to -- although not into -- application objects in order to create a plug-and-play component software environment based on object technology. Application objects, although not standardized by OMG, will access CORBAservices and CORBAfacilities through standard interfaces, providing benefits to both providers and end users: for providers, lower development costs and an expanded user community; for end users, a lower-cost software environment that can easily be configured to their specific needs.

 

Based on the CORBA architecture we just discussed, the OMA specifies a set of standard interfaces and functions for each component. Different vendors' implementations of the interfaces and their functionality then plug and play on customers' networks, allowing integration of additional functionality from purchased modules or in-house development. The OMA is divided into two major components: lower-level CORBAservices and intermediate-level CORBAfacilities.

 

The CORBAservices provide basic functionality that almost any object might need: object life cycle services such as move and copy, naming and directory services, and other basics. Basic does not necessarily mean simple, however: included are object-oriented access to on-line transaction processing (OLTP), the mainstream application for business accounting, and a sophisticated object Trader service -- a kind of "yellow pages" where objects advertise the availability (and price!) of their services.

 

Where the CORBAservices provide services for objects, the CORBAfacilities provide services for applications. For instance, the Compound Document Management CORBAfacility (OpenDoc) gives applications a standard way to access the components of a compound document. With this in place, a vendor could easily generate a sophisticated set of tools for advanced manipulation of one part of the document -- images, say -- and market them without having to generate the basic functionality themselves. Sophisticated users could upgrade one part of an application suite to a very high level if they require, without having to modify or substitute for the remaining parts.

 

It is the CORBAfacilities that give meaning to our promise of application integration. The CORBAfacilities have two major components: one horizontal, including facilities such as the compound document services just mentioned, which can be used by virtually every business; and the other vertical, standardizing management of information specialized to a particular industry group. So large is the scope of this effort that the CORBAfacilities will eventually dwarf CORBA and the CORBAservices in size. OMG does not plan to produce all of the CORBAfacilities standards itself; the organization has put in place procedures to incorporate other consortia's standards as CORBAfacilities as long as they conform to the rest of the OMA.

 

CORBA Benefits.

So why move to CORBA and the OMA? Here's a summary of the benefits from two points of view: first, for your developers -- the people who design and produce your applications; and second, for your users -- not just the people who run your applications but the organization itself, with its own complex set of requirements.

 

Software Development.

The software development environment should support the developers and programmers by providing an object oriented environment and tools to assist in the architecture, design, interfaces, implementation, programming, testing and change. The following capabilities provide a strong CORBA baseline for software development.

 

Give them an IDL interface and a thin layer of wrapper code, and legacy applications come into the CORBA environment on an equal basis with your new software components. Since you have to keep going full speed while you bring in distributed computing and object orientation, this is essential to enabling enterprises to make the transition. The CORBA/OMA environment maximizes programmer productivity: CORBA provides a sophisticated base, with transparent distribution and easy access to components. The CORBAservices provide the necessary object-oriented foundation, while the CORBAfacilities will standardize management of shared information. Developers create or assemble application objects in this environment, taking advantage of every component. This standard CORBA/OMA environment helps in three ways: your programmers don't have to build the tools, they're provided for you; the same set of CORBAservices and CORBAfacilities is available with every CORBA environment, so both your applications and your programmers port from one platform/ORB to another; and interoperability results, because clients on one platform know how to invoke standard operations on objects on any other platform.

You can mix and match tools within a project -- develop a desktop component using an interactive builder, for instance, while you write its server module in a lower-level language like C++. CORBA will allow the two to interoperate smoothly.
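The sketch below illustrates the "thin layer of wrapper code" idea mentioned above. It assumes a hypothetical legacy C routine, legacy_fire_solution(), and hides it behind an object interface so it can participate in the component environment alongside new code; both names are invented for this example.

```cpp
#include <iostream>

// Stub standing in for a hypothetical legacy C routine we do not want to rewrite.
extern "C" double legacy_fire_solution(double range_m, double bearing_deg) {
    return 0.001 * range_m + bearing_deg;  // placeholder computation
}

// Object interface, standing in for what an IDL-defined contract would provide.
class FireControl {
public:
    virtual ~FireControl() = default;
    virtual double solution(double range_m, double bearing_deg) = 0;
};

// Thin wrapper: the legacy routine enters the object environment unchanged.
class LegacyFireControl : public FireControl {
public:
    double solution(double range_m, double bearing_deg) override {
        return legacy_fire_solution(range_m, bearing_deg);  // delegate to old code
    }
};

int main() {
    LegacyFireControl fc;
    std::cout << fc.solution(12000.0, 45.0) << "\n";  // client sees only the object interface
}
```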

 

 

For Your Users and Your Country.

You need to solve the entire integration problem in order to survive, but you also need to devote maximum resources to widening your technological edge in order to compete. There’s only one way to do this: Use industry standards to get your country into the playing field the quickest and cheapest way; then devote the resources you saved toward building an edge that beats your competition hands-down.

 

For your users a CORBA/OMA application is a dynamic collection of client and object implementation components, configured and connected at run time to attack the problem at hand. It may include and integrate:

 

History.

There are three architecture areas: the object request broker, object services, and common facilities. CORBA Version 1.1, which covers the object request broker (ORB), was adopted in December 1991. The CORBA revision process comprises upwardly compatible extensions. CORBA Version 1.2, released in June 1994, provided editorial corrections and did not change ORB functionality. CORBA 2.0 extensions include ORB interoperability and further specification of the Interface Repository and Implementation Repository. Future CORBA 2.0 activities include additional language bindings such as Ada, Cobol, and Smalltalk. Event notification, object naming, and object life cycle are approved object services specifications. The revision added the following: availability of object adapters other than just the basic one, compliance validation, multimedia support, transaction support, and concurrency control. The adoption process for common facilities is being developed. The object services are fundamental enabling service specifications, whereas common facilities are high-level service specifications that will provide high leverage to application developers. The security specification was completed at the beginning of 1996. In January 1996, a special interest group was formed to initiate realtime CORBA work and assess the requirements and interest in providing realtime extensions to the CORBA model. Object services and common facilities supply the interfaces for application-to-application interoperability as well as for commercially supplied services. These standard interfaces have a dual role: suppliers may provide implementations of standard services, and application developers can reuse standard interfaces.

 

 

CORBA Services-- CORBAservices: Common Object Services Specification

Component 1

Object Event Notification Service -- OMG Document 94.1.1

Object Life Cycle Service -- OMG Document 94.1.1

Object Naming Service -- OMG Document 94.1.1

Object Persistence Service -- OMG Document 94.1.1 & 94.10.7

Component 2

Object Concurrency Service -- OMG Document 94.5.8

Object Externalization Service -- OMG Document 94.9.15

Object Relationships Service -- OMG Document 94.5.5

Object Transaction Service -- OMG Document 94.8.4

Component 3

Object Security Service -- OMG Document 95.12.1

Object Time Service -- OMG Document 95.11.8

Component 4

Object Licensing Service -- OMG Document 95.3.7

Object Properties Service -- OMG Document 96.6.1

Object Query Service -- OMG Document 95.1.1

Component 5

Interface Type Versioning -- OMG RFP 95.11.10

Object Collections Service -- OMG Document 96.5.5

Object Trader Service -- OMG Document 96.5.6

Object Startup Service -- OMG Document 94.10.24


CORBA Facilities - CORBAfacilities: Common Facilities

 

Common Facilities -- OMG document 94.8.10 -- Common Facilities is a collection of higher-level services that provide capabilities broadly applicable to many applications or high-value capabilities for specific industries. A "Common Facility" defines interfaces and sequencing semantics that are widely available and are most commonly used to support building well-formed applications in a distributed object environment. Common Facilities is a logical extension of the OMG activities in the OMA, CORBA, and COSS. Common Facilities comprises the set of adopted specifications that are most relevant to application needs. The Common Facilities focus is on interoperability, whereas the OMA, CORBA, and COSS are focused on portability issues.

Common Facilities Roadmap.

CORBA Applications - Client/Server

Appendix A contains application examples from users and developers of CORBA technology. The reprints are news releases for programs, projects, and development efforts based on CORBA technology. The vertical CORBA Facilities represent a wide selection of business areas, as follows: Telecommunications, Computer Development, Software Development, Television, Energy, Financial, Manufacturing, Banking, Medical, Research, and Defense.

-- AT&T (Articles included as Appendix A) ORB -- NCR Cooperative Frameworks

-- DEC (Articles included as Appendix A) ORB -- ObjectBroker

Swiss Telecom PTT -- Telecommunications

-- Expersoft (Articles included as Appendix A) ORB -- XShell

TV/COM International Inc. -- conversion of Canada’s analog TV to digital

Universal Oil Products -- refinery design

Goldman Sachs -- financial

Canadian Imperial Bank of Commerce -- financial

Fidelity Investments -- financial

National Bank of Australia -- financial

-- IBM (Articles included as Appendix A) ORB -- SOM

-- IONA (Articles included as Appendix A) ORB -- Orbix

Tradepoint -- financial

Motorola -- Telecommunications

Ericsson -- Telecommunications

Boeing -- Manufacturing

Schlumberger -- Manufacturing

Asea Brown -- Manufacturing

Swiss Bank -- Banking

Chemical Bank -- Banking

TASC -- Defense

CWC -- Defense

Baxter -- Medical

SGI -- Computing

Los Alamos National Labs -- Research

BIBA -- Research

-- SUN (Articles included as Appendix A) ORB -- NEO

Realtime.

The Realtime CORBA Special Interest Group (SIG) was formed in January 1996. Russ Johnston, NRaD, has been a SIG member since its conception and is the DoD representative (supporting the DISA Center for Standards) to the OMG Realtime CORBA SIG.

Dock Allen

email: dock_allen@omg.org

Co-Chair: Peter Krupp

email: Peter_Krupp@omg.org

 

Available Realtime Implementations.

Orbix -- operating system platforms: QNX, VxWorks, LynxOS, NT, and Sun Solaris

CORBA Security.

This is a service defined by an OMG document titled "CORBA Security," dated December 1995, with OMG Document number 95.12.1. This document defines security as follows:

 

Security protects an information system from unauthorized attempts to access information or interfere with its operation. It is concerned with:

 

Security is enforced using the security functionality described in 95.12.1. In addition, there are constraints on how the system is constructed -- for example, to ensure adequate separation of objects so that they do not interfere with each other, and separation of users' duties so that the damage an individual user can do is limited.

 

The key features defined in the CORBA security specification comprise:

 

This visible security functionality relies on other security functionality, such as cryptography, which is used in support of many of the other functions but is not visible outside the security services. No direct use of cryptography is proposed by the specification, nor are any APIs defined.

 

For complete details on this service, please refer to the OMG document specified above (308 pages).

 

CORBA Supporting Collateral Standards.

 

X/Open X11R6 Fresco portability standard (XPG4). Fresco is a user interface framework that supports graphics, widgets, and embedded applications. It is CORBA compliant in the sense that all application program interfaces (APIs) are specified in OMG Interface Definition Language (IDL). The reference implementation for Fresco is written in C++ and uses a library-based object request broker (ORB). OMG IDL enables Fresco to support multiple languages (current and future bindings of OMG IDL) as well as straightforward translation of software to distributed computing. Fresco had a sample release in May 1994 and a final release in conjunction with the X Window System standard X11R6.

 

Apple, IBM, WordPerfect, OpenDoc embedded documents. OMG Compound Presentation and Compound Interchange Facilities (Apple Computer, Component Integration Labs, IBM, and Novell), December 13, 1995 -- This accepted standard specification is derived from the OpenDoc standard. If the C4I community treats the user desktop as a compound document, then the OpenDoc concepts and features can be applied and the benefits realized. OpenDoc is a first-tier extension to the CORBA specification, defining additional APIs that allow the construction of compound user information sets across a distributed environment. OpenDoc allows application developers to take a modular approach to development and maintenance. Its component-software architecture can make the design, development, testing, and marketing of integrated software packages easier and more reliable compared to their monolithic counterparts. Developers can make incremental changes to component products without affecting unrelated components, and can therefore get those changes to users far more rapidly than is possible using the monolithic approach. OpenDoc components allow users to assemble customized compound documents out of diverse types of data. They can also support cross-platform sharing of information. They resolve user frustrations that occur at the boundaries between conventional, monolithic applications. Table 1, modified from the OMG document for C4I, summarizes the advantages of the OpenDoc approach to software, both for users and developers.

 

 

Table 1. OpenDoc Advantages

 

Components are ...Modular
    For users: easy to add or replace parts.
    For software engineers: easy to upgrade interface components; easier to test components.
    For software project management: great flexibility in assembling packages.
    For development teams: can create a component that works seamlessly with all others.

Components are ...Small
    For users: code for individual components takes up much less disk space and memory.
    For software engineers: easier to design, code, debug, and test.
    For software project management: easier, cheaper distribution.
    For development teams: faster development, easier distribution of a component.

Components are ...Cross-platform
    For users: user interfaces travel across platforms; users select familiar interfaces on each.
    For software engineers: can leverage development effort on one platform to others.
    For software project management: opportunities for increased user responsiveness.
    For development teams: application of limited resources can be used across platforms.

 

OpenDoc has several major goals in addition to the OMG requirements, each of which is equally important to the success of the package.

OpenDoc includes a set of interoperability protocols that allows code produced by independent development teams to cooperate to produce a single document for the end user. APIs are provided that allow these cooperating executables to negotiate over human interface resources and document layout on the screen and on printing devices, to share storage containers, and to create data links to one another.

OpenDoc documents must have the ability to be passed through review cycles with minimal pain on the part of users. OpenDoc includes a draft capability which allows work to be performed on multiple versions of a document.

OpenDoc is designed as a cross-platform architecture.

OpenDoc allows the replacement of the implementation of any of its subsystems on any platform. All media-data for persistent storage is fully documented so that interoperability is maintained.

OpenDoc does not specify drawing systems, coordinate systems, window systems, human interface guidelines or many other platform specific elements. This makes the architecture more generally available for use across platforms while standards for these other areas are developed.

 

OSF DCE. The Open Software Foundation (OSF) Distributed Computing Environment (DCE) provides a standard, procedure-based mechanism for remote procedure calls between heterogeneous systems to accomplish programming tasks. CORBA can provide a wrapper to encapsulate the DCE mechanism and present it as object technology. DCE is only one of the possible communication mechanisms available to CORBA 2.0 compliant ORBs. DCE is a product and has good security features, which have made it attractive for system implementations. Vendors/developers like DEC, who participated in the DCE development, have plans to layer CORBA on top of the DCE lower-level mechanisms. This would allow systems to use DCE security mechanisms in a CORBA environment.

 

Object Database Management Group ODMG-93. The Object Database Management Group (ODMG) is a working group within the OMG. It was formed as the result of the lack of an ODBMS standard and frustration with the slow progress of standards bodies such as the ANSI X3H2 committee. The ODMG first met in the fall of 1991. It originally had five members (Object Design Inc., O2 Technology, Versant Object Technology, Objectivity Inc., and Ontos) and was organized by Rick Cattell (of SunSoft). Its purpose was to create an object database standard that can be used to ensure application portability between conforming ODBMS implementations. The ODMG has achieved much success in its mission and continues to work towards the development of an ODBMS standard. The ODMG-93 standard was published by the ODMG as a book at the beginning of 1994.

 

Microsoft Distributed Common Object Model (DCOM). Microsoft, in conjunction with Digital Equipment Corporation, is producing a specification that will bridge Microsoft's proprietary technology with the open OMG/CI Labs technologies (CORBA 2.0/OpenDoc). The current attempt will create a bridge to allow messages to pass into and out of DCOM to the wider CORBA world. COM provides a narrower view of the computing world: in its current implementation (OLE2), COM does not handle a distributed environment using heterogeneous operating systems and mixed communications mechanisms. Both of these features are implicit in CORBA, hence the bridge. The current view is that clients (PCs) will use DCOM and servers will use CORBA and/or DCE. At present, CORBA is the only available middleware standard that provides these services. DCOM will be a strong contender against CORBA because Microsoft plans to provide DCOM with the NT 4.0 operating system, due to be released in the December time frame. Note that Microsoft has changed the name in the last two months from OLE/COM to DCOM; their distributed services have been difficult to follow because of a series of name changes over the last couple of years.

 

 

 

STATUS OF THE TECHNOLOGY

 

CORBA Products Directory.

NCR Cooperative Frameworks -- ORB Toolkit of Over 300 C++ classes

HP/UX, Sun Solaris, NCR Unix, MS Windows 3.1, Netware v3.11

DEC ObjectBroker -- includes bridge to COM

HP/UX, Windows, Windows/NT, Macintosh, SunOS, AIX, OSF/1, Ultra, OpenVMS

Expersoft Xshell

SunSPARC, HP700, IBM RS6000, SCO Unix, SGI

HP Distributed Smalltalk

HP/UX, IBM RS 6000, SunSPARC Solaris

IBM SOM

OS/2, AIX, Windows

ICL Distributed Application Integration System

SunOS, Solaris, VMS, OSF/1, HP/UX, Windows, Windows/NT, OS/2, SCO UNIX, AIX, SVR4, OpenVME

IONA Orbix

Solaris, IRIX, HP/UX, Windows, Windows/NT, SCO UNIX, OSF/1, OS/2, AIX, SunOS

ISIS Reliable Distributed Objects

SunOS, Solaris, HP/UX, AIX, VMS

 

Compatibility and Migration support.

Environments. Some form of the OMG ORB operates in the following environments:

HP-UX, VMS, AIX, Windows & Windows/NT, OS/2, Solaris, DG UNIX, Macintosh, OSF/1, IRIX, PC.

Languages. IDL Supported Languages

C, C++, Smalltalk, Ada -- These languages currently have, or have in progress, mappings from OMG CORBA 2.0 IDL. This allows the definition of language-neutral interfaces between objects created in different programming environments.

Public Domain Toolkit. The Object Management Group has a public domain IDL Toolkit at ftp.omg.org.

 

 

COMMERCIAL ASPECTS OF THE TECHNOLOGY

 

Commercial Focus.

The commercial focus is on providing the CORBA standards, which define the interfaces and services; the exact implementation is left to the implementors of the specification. Interoperability is a major factor: communication between ORBs is one level, but the processing of objects from heterogeneous ORBs still has to be demonstrated.

 

Developers.

Each developer is providing extended services and capabilities which in many cases are needed, but these services are not specified in the standard. If a system designer uses an ORB with the extended services and capabilities of a specific ORB developer, then the system is dependent on that specific implementation and will not be fully interoperable across a heterogeneous suite of ORBs where each has specific extensions.

 

Technology Impact for DoD.

The impact on DoD of evolving from stovepipe systems to enterprise-wide systems with an open, object-oriented architecture will be major and very costly. The cost of not moving forward will be even greater over the long haul. The next twenty years will see a very fast-paced evolution/revolution toward very refined, robust middleware standards and products. The complexity will initially be very expensive but in time will become very cost effective, much the same way hardware has evolved from SSI integrated circuits to super VHSIC integrated circuits (the $500 calculator that now costs $12).

 

CORBA - DCE - DCOM.

When considering CORBA, DCE, and DCOM, which method is best for solving the DoD and military requirements for distributed computing?

 

Given the complexity of today’s problems, programmers can no longer use traditional top down and structured design techniques without running the risk of "losing sight of the forest for all of the trees." As such, the commercial market is rapidly moving toward object oriented approaches, allowing the programmer to "create the trees" first, trust that the trees exist, and then "worry about assembling the forest" later.

 

If the best way to go is the object-oriented approach, how then do CORBA, NFS/DCE and/or DCOM fit in?

 

CORBA (developed by 600+ companies in the software industry) and DCOM (Microsoft) are specifications. OLE2 (Microsoft) is an implementation. DCE (the Open Software Foundation -- another consortium) is both a specification and an implementation. All vary in the degree with which they are object oriented.

 

The Common Object Request Broker Architecture (CORBA) is a set of specifications. The Object Management Group (OMG), owner of the specification set, does not produce or sell code. Instead, OMG specifies a common architecture, including a common Object Request Broker (ORB), allowing interoperability between vendors' middleware across dissimilar platforms. The CORBA specification also forces the creation of only true objects -- a requirement for registration with the CORBA backbone ORB. There is no one true or unique CORBA implementation, but several companies have their own commercial software libraries (implementations). So there is no single CORBA "equivalent" to OLE2 for COM.

 

The Component Object Model (COM), Microsoft's own specification and competitor to CORBA, is language neutral (you can use multiple languages) but is not environmentally neutral (you cannot interoperate across operating systems, e.g., Win95 and Windows NT). Along with DEC, however, Microsoft is striving to make it both environmentally neutral and capable of cross-platform operation. Additionally, COM is object-based, not object-oriented, i.e., not all of the features associated with objects exist in the COM environment. This, then, affects scalability. COM requires that all COM-based components exist before a user can request one from the pool. There is no mechanism to manufacture components by instantiation on the fly, as there is in a CORBA implementation -- meaning you have to know how big the problem is before you solve it. Once the solution is in place, evolving, modifying, or "growing" it will be difficult.

 

OLE2, often referred to in the literature as "OLE2/COM," is Microsoft's proprietary implementation of the COM specification. At present, OLE2/COM does not distribute across platforms. Microsoft is working on this, but a UNIX implementation will not be available in the near future.

 

The third candidate, the Distributed Computing Environment (DCE), is both a specification and an implementation. Open Software Foundation (OSF) member companies co-staff a facility in Massachusetts where the staff creates source code. Once completed, the code is shipped to the membership, where it may be further modified to meet a specific member's requirements.

 

DCE is a very complex, low-level mechanism for executing procedures across a distributed environment. DCE is commonly wrapped into higher-level APIs to reduce the complexity for application programmers. Common "wrappers," or middleware, include Ellery Open Systems (EOS), used to develop NASA's Astrophysics Data System (ADS) and Earth Data System (EDS), and CORBA. EOS is a much older implementation than CORBA, but it does not totally mask the DCE API and it is not object-oriented. Masking DCE is important because DCE uses over 500 API calls; by masking that many calls, the risk of creating faulty code is greatly diminished.

 

Given the foregoing, the best approach for the Navy to use is a CORBA implementation. Given the number of companies, world-wide, which have invested in CORBA and, given Microsoft’s recent decision to join OMG rather than to continue to fight them with COM, it is highly unlikely that a reasonable alternative to CORBA will appear anytime soon. Hence, for the foreseeable future, if the Navy wants to follow the commercial community and benefit from the commercial software industry’s massive economies of scale, CORBA is the way to go.

OLE supports a feature called Automation. OLE Automation provides a way for an application to manipulate other applications in a command-like fashion. For example, a user in a word processing program can invoke a command to enlarge an image created by another program. OLE Drag and Drop allows users to move objects from one application to another; this is particularly convenient when a document is composed of a large number of components. OLE is not a contiguous architecture, residing instead on top of the Component Object Model described above.

 

NON REALTIME TECHNOLOGY

TBD

 

REALTIME TECHNOLOGY

 

Introduction.

With the advent of increasingly complex applications that depend on timely execution (such as military command and control, telecommunications, automated manufacturing, medical patient monitoring, and multimedia), many distributed computer systems must support components capable of realtime distributed processing. That is, these applications require enforcement of end-to-end timing constraints on service requests from clients in a distributed computing environment. For example, the client may impose a deadline by which its requested service must be performed. These service requests may require exceptionally fast processing. However, "fast" processing may not always be sufficient; some applications may require that processing predictably occur within timing constraints at a certain quality of service.
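As a simple, hedged illustration of a client-imposed deadline (not an OMG-specified API), the sketch below wraps a service request with a deadline check using standard C++ facilities. The function name and the stand-in request are invented for this example; a check like this only detects violations at the client and does not by itself provide end-to-end enforcement.

```cpp
#include <chrono>
#include <functional>
#include <iostream>
#include <stdexcept>

// Invoke a request and check that it completed before its deadline. This is a
// client-side check only; true end-to-end enforcement also needs scheduling
// support from the ORB, the operating systems, and the network.
template <typename Result>
Result invoke_with_deadline(std::function<Result()> request,
                            std::chrono::milliseconds deadline) {
    auto start = std::chrono::steady_clock::now();
    Result r = request();                                    // the (hypothetical) service call
    auto elapsed = std::chrono::steady_clock::now() - start;
    if (elapsed > deadline) {
        throw std::runtime_error("deadline violated");       // timing-constraint violation handling
    }
    return r;
}

int main() {
    // query_track is a stand-in for a remote service request.
    auto query_track = []() { return 42; };
    int track = invoke_with_deadline<int>(query_track, std::chrono::milliseconds(50));
    std::cout << "track id " << track << "\n";
}
```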

 

Current realtime technology does not completely meet these requirements. Realtime operating systems have recently become available that provide realtime features such as priority-based CPU scheduling and priority-inheritance-based synchronization primitives. Realtime networking is developing fast and predictable communication. Realtime databases are starting to emerge that provide realtime coordination of access to shared data. However, these components alone are not sufficient to realize a realtime distributed application. The integration of these components requires middleware with realtime support.

 

A few vendors have ported their ORBs to realtime operating systems, which is a necessary step in developing RT CORBA, but again is not alone sufficient. Research and development in applications such as the ReTina telecom project and Lockheed-Martin aerospace work has established the utility as well as the feasibility of RT CORBA. Projects to design and prototype realtime extensions to commercial ORBs at government research labs and universities further demonstrate the feasibility of RT CORBA.

 

Although other SIGs and Task Forces within OMG are developing CORBA extensions that will be useful for realtime, such as streaming from the Telecom Task Force, they too are not sufficient. The ability to enforce end-to-end timing constraints, through techniques such as global priority-based scheduling, must be addressed across the CORBA standard. This is the focus of the RT PSIG and the topic of this whitepaper. The four main objectives are:

 

1) To define common terminology and concepts for use in developing realtime (RT) extensions to CORBA.

 

2) To show that it should be done: To introduce the Realtime CORBA market, including the needs and size of this market.

 

3) To show that it has not been done: To show what technology extensions are necessary to meet these needs.

 

4) To show that it could be done: to indicate that realtime extensions to CORBA are feasible for vendors to produce.

 

The approach is to assume an architecture that consists of an underlying system, an ORB, and Object Services -- all with realtime requirements. The realtime requirements on the underlying system include the use of realtime operating systems on the nodes in the distributed system and of a realtime communication medium among the nodes. The specific requirements for such an underlying system are presented in Section 4.1. A realtime ORB must be capable of supporting enforcement of end-to-end timing constraints; its specific requirements are presented in Section 4.2, and some suggested ORB modifications are presented in Section 6.2. Requirements on additional realtime Object Services, such as a Global Priority Service and a Global Guarantee Service that guarantees a certain quality of service under specified timing constraints, are presented in Section 4.2; a description of these services is presented in Section 6.3.

 

The realtime section is structured as follows. Section 2 addresses Objective 1 by presenting definitions and a general discussion of realtime distributed computing. These definitions and concepts will be used throughout the whitepaper and will serve as a basis for a common set of terminology to be used in the RT SIG and eventual CORBA standard. Section 3 addresses Objective 2 by surveying the requirements from the naval command, control, communication, and intelligence (C3I) application domains to show their needs for CORBA/RT. Section 4 synthesizes these requirements to a general set of requirements for RT CORBA. Section 5 addresses Objective 3 by showing that implementing CORBA on a realtime operating system and communication system is not sufficient to meet the requirements of Section 4. Section 6 addresses Objective 4 by defining a set of core technologies and capabilities that provide the minimal support required for realtime systems. We will then define extensions in the form of additional services and facilities that provide more sophisticated and capable support.

 

Definition.

This section provides general definitions and background on realtime systems. In a realtime system, timing constraints must be met for the application to be correct. This definition of realtime typically comes from the system interacting with the physical environment. The environment produces stimuli, which must be accepted by the realtime system within timing constraints. For instance, in an air traffic control system, the environment consists of aircraft that must be monitored. The environmental stimuli are fed into the system through sensors such as radar. The environment further requires control output, which must be produced within timing constraints. In the air traffic control example, signals to the aircraft and displays to the human tracker have timing constraints that must be met. Time-constrained output can be even more important in applications such as autopilot aircraft control, where the output of the system directly controls the environment and has tight timing constraints that must be met. In a distributed realtime system, these timing constraints are end-to-end and often require the use and scheduling of several resources (e.g., CPUs on each node and the communication medium between them).

 

One of the main misconceptions about realtime computing is that it is equivalent to fast computing. Stankovic challenges this myth in [Stankovic88] by arguing that computing speed is often measured as average-case performance, whereas to guarantee timing behavior in many realtime systems, worst-case performance should be used. That is, in a critical application such as avionics control, where timing constraints must be met, worst-case performance must be used when designing and analyzing the system. Thus, although speed is often a necessary component of a realtime system, it is not sufficient. Instead, predictably meeting timing constraints to a desired level (ranging from guaranteed timing constraints to best-effort at meeting timing constraints) is what is required in realtime system design.

 

In this section we will discuss three kinds of realtime applications:

 

1) Embedded Applications. Embedded applications are small and have a known, fixed environment, such as device controllers. They are small and simple enough to allow exhaustive testing of timing behavior and thus often sacrifice flexible, dynamic realtime computing techniques in favor of faster, more efficient ones. That is, fast computing may be sufficient in these applications.

 

2) Static Applications. Static applications are too large for timing analysis through exhaustive testing, but still exhibit known execution times, known activities, and relatively well-known environmental interactions (they may be able to accommodate some limited environmental variance). Execution is usually periodic. Tracking applications, manufacturing, and larger control systems are examples. A priori timing analysis is usually possible and desirable in these applications.

 

3) Dynamic Applications. These applications are large and adaptive, and their environment can change. Telecommunications and electronic finance are examples. There are still timing constraints that must be met, but a priori analysis of the system's ability to meet them is often impossible. These systems require dynamic, flexible scheduling of resource utilization.

 

System Requirements.

Realtime systems require that timing constraints be expressed and enforced and that their violations be handled. In a realtime CORBA system, a client's method invocation on a server is one example of a time-constrained activity. An entire client, or a client's series of method invocations, might also be a time-constrained activity. Timing constraint expression can take the form of start times, deadlines, and periods for activities. Timing constraint enforcement requires predictable bounds on activities. The handling of timing constraint violations depends on the activity's requirements: whether they are "hard," "firm," or "soft" realtime. We now examine each of these aspects of timing constraints.

 

Timing Constraints.

Most realtime systems specify a subset of the following constraints:

 

o An "earliest start time" constraint specifies an absolute time before which the activity may not start. That is, the task must wait for the specified time before it may start.

 

o A "latest start time" constraint specifies an absolute time before which the activity must start. That is, the activity has not started by the specified time, an error has occurred. Latest start times are useful to detect potential violations of planned schedules or eventual deadline violations before they actually occur.

 

o A "deadline" specifies an absolute time before which the activity must complete.

 

Frequently, timing constraints will appear as "periodic execution constraints". A periodic constraint specifies earliest start times and deadlines at regular time intervals for repeated instances of an activity. Typically a "period frame" is established for each instance of the (repeated) activity, as shown in Figure 1.

 

*** Insert Figure Showing Timeline With Periods *****

 

In Figure 1, period frame i specifies the default earliest start time and deadline for the i-th instance of the activity. When periodic execution is originally started, the first frame is established, at time s in Figure 1. For periodic execution with period p, the i-th frame starts at time s + (i-1)p and completes at time s + (i)p. As this indicates, the end of frame i is the beginning of frame i+1. Each instance of an activity may execute anywhere within its period frame.
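
As a concrete illustration of this frame arithmetic, the following C++ sketch computes the default earliest start time and deadline of the i-th instance. The representation of time as seconds in a double and the names used are illustrative only, not part of any standard.

    // Illustrative only: time is represented as seconds in a double.
    struct Frame {
        double earliest_start;   // default earliest start time of the i-th instance
        double deadline;         // default deadline of the i-th instance
    };

    // Returns the i-th period frame (i >= 1) of an activity whose periodic
    // execution began at absolute time s and repeats with period p.
    Frame frame_of_instance(double s, double p, int i) {
        Frame f;
        f.earliest_start = s + (i - 1) * p;   // start of frame i
        f.deadline       = s + i * p;         // end of frame i == start of frame i+1
        return f;
    }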

 

Modes.

Realtime constraints are classified as hard, firm, or soft, depending on the consequences of the constraint being violated.

 

***Insert Figure With Value Function Graphs For Hard, Soft, Firm ***

 

 

An activity with a "hard" realtime constraint has disastrous consequences if its constraint is violated. This characteristic is depicted in Figure 2A where the activity causes a large negative value to the system if its deadline is missed. Many constraints in life-critical systems, such as nuclear reactor control and military

vehicle control, are hard realtime constraints.

 

An activity with a "firm" realtime constraint has no value to the system if its constraint is violated. This characteristic is depicted in Figure 2B where the activity's value goes to zero after its deadline. Many financial applications have firm constraints with no value if a deadline is missed.

 

An activity with a "soft" realtime constraint has decreasing, but usually non-negative, value to the system if its constraint is violated. This characteristic is depicted in Figure 2C where the activity's value decreases after its deadline. For most applications, most activities have soft realtime constraints. Graphic display updates are one of many examples of activities with soft realtime constraints.

 

In some systems the mode of realtime is captured in an activity's "importance" level. In some systems activity importance is categorized according to the mode of its timing constraint (hard, firm, soft). In other systems, importance is more general and activities can be assigned importance relative to each other over a wider granularity of levels. For instance, in avionics, activities for navigation are typically of higher importance than activities for data logging. Note that importance is not the same as "priority". Priority, which is discussed in more detail in Section 2.2, is a relative value used to make scheduling decisions. Often priority is a function of importance, but also can depend on timing constraints, or some combination of these, or other, activity traits.

 

Predictability.

In order to predictably meet timing constraints, it must be possible to accurately analyze timing behavior. To analyze timing behavior, the scheduling algorithm for each resource and the amount of time that activities use each resource must be known. To fully guarantee this timing behavior, these resource utilizations should be worst case values, although some soft and firm realtime systems can tolerate average case values that offer no strong guarantee.

 

Determining the resource usage time of activities is often difficult. Results that can be obtained are often pessimistic worst cases with very low probability of occurring. Consider CPU utilization, which is one of the easier utilizations to determine. To establish a worst case CPU utilization, all conditional branches in the activity must be assumed to take their worst path, and all loops and recursion must be assumed to have some bounded number of instances. Also, other activity behavior, such as whether the activity can be preempted from the CPU while waiting for other system resources, must be considered. For instance, if an activity requires dynamic memory allocation, is it swapped off the CPU awaiting the memory allocation? CPU utilization is only one factor in an activity's resource requirements. An activity also needs resources such as main memory, disk accesses, network buffers, network bandwidth, I/O devices, CORBA system services such as the Name Service, etc. Furthermore, the use of these resources is inter-related and thus cannot be computed in isolation. The problem worsens in distributed systems where resource management is typically not centralized. There has been limited research work in determining worst case execution times, but in general this work makes limiting assumptions and/or produces very pessimistic results. Still, in order to guarantee, or at least analyze, the adherence to realtime constraints, resource utilizations of activities must be known, so these rough estimates are used.

 

Assuming that worst case resource utilizations are known, analyzing timing behavior for predictability depends on the scheduling algorithms used. In Section 2.2 we discuss several realtime scheduling techniques and the forms of analysis that these techniques facilitate.

 

Quality of Service.

The introduction of timing constraints adds another dimension to realtime computing: the system may need to yield a lower "quality of service". That is, due to inadequate time to fully compute accurate results, the results produced by an activity or set of activities may not be exactly correct. In systems where timely but less accurate results are better than late exact results, such imprecision may be tolerated. For instance, an air traffic control system may need a quick approximate position of an incoming aircraft rather than a late exact answer. Often the imprecision that is allowed must be within a specified bound. For instance, the position data might have to be accurate within a few meters. More accurate data is desirable, but can be sacrificed if timing constraints do not permit it.

 

In realtime object-oriented systems, quality of service can be enforced through "performance polymorphism". Performance polymorphism was first fully modeled by Shin [Shin94]. It extends the object-oriented notion of polymorphism and dynamic binding, which typically allows run-time selection of the appropriate method to execute among many possibilities. Usually these possible methods are on the same object interface, often arrived at through inheritance. In performance polymorphism, the selection of the appropriate method is made based on quality of service, either the QOS that is allowed or the QOS that is requested. "Allowed QOS" is the best accuracy that timing constraints allow. "Requested QOS" is achieving the minimal requested accuracy in the least possible time.
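
The following C++ sketch illustrates one way "allowed QOS" selection could look: given the time remaining before a deadline and assumed worst-case execution times for a precise and an approximate method, the most accurate variant that still fits is chosen. The class, method names, and numbers are hypothetical and are not drawn from any CORBA interface.

    #include <string>

    // Hypothetical server offering two implementations of the same logical operation.
    struct TrackServer {
        std::string precise_position()     { return "precise fix"; }      // slower, more accurate
        std::string approximate_position() { return "approximate fix"; }  // faster, less accurate
    };

    // Choose the most accurate variant whose assumed worst-case execution
    // time (in seconds) still fits in the remaining time budget.
    std::string position_within_budget(TrackServer& s, double time_remaining) {
        const double wcet_precise     = 0.200;   // assumed published worst-case times
        const double wcet_approximate = 0.020;
        if (time_remaining >= wcet_precise)
            return s.precise_position();         // allowed QOS: best accuracy that fits
        if (time_remaining >= wcet_approximate)
            return s.approximate_position();     // degrade accuracy to stay timely
        return "";                               // nothing fits: a timing fault to handle
    }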

 

 

Scheduling.

Realtime scheduling algorithms seek to assign tasks to system resources so that timing constraints are met. There are many realtime scheduling algorithms. Typically, a scheduling algorithm assigns "priorities" to activities. The priority assignment establishes a partial ordering among activities. Whenever a scheduling decision is made, the scheduler selects the activity(s) with highest priority to use the resource.

 

"Optimal realtime scheduling" involves creating a schedule of resource use that meets all timing constraints whenever it is possible to meet them. That is, an optimal scheduler creates a schedule that meets timing constraints if such a schedule exists. Note that in overloaded system, it may not be possible to meet all timing constraints and thus no scheduler can do it. Optimal realtime scheduling, in general, is an NP-hard problem. However, hueristics have been developed that yield optimal schedules under some strong assumptions, or near-optimal results under less-restrictive assumptions.

 

There are several characteristics that differentiate scheduling algorithms. They are:

 

o *Preemptive versus nonpreemptive* -- If the algorithm is preemptive, the activity currently using the resource can be replaced by another activity (typically of higher priority).

 

o *Hard versus soft realtime* -- To be useful in systems with hard realtime constraints, the realtime scheduling technique should allow analysis of the hard timing constraints to determine if the constraints will be predictably met. For firm and soft realtime, predictability is desirable, but often a scheduling technique that can demonstrate best-effort, near-optimal performance is acceptable.

 

o *Dynamic versus static* -- In static scheduling algorithms, all activities and their characteristics are known before scheduling decisions are made. Typically activity priorities are assigned before run-time and are not changed. Dynamic scheduling algorithms allow activity sets to change and usually allow for activity priorities to change. Dynamic scheduling decisions are made at run-time.

 

o *Single versus multiple resources* -- Single resource scheduling manages one resource in isolation. In many well-known scheduling algorithms, this resource is a single CPU. Multiple resource scheduling algorithms recognize that most activities need multiple resources and schedule several resources. End-to-end schedulers schedule all resources required by the activities.

 

Rate-Monotonic & Earliest-Deadline-First Scheduling.

For a known set of independent, periodic activities with known execution times, Liu and Layland proved that "rate-monotonic" CPU scheduling is optimal [LiuLayland73]. Rate-monotonic scheduling is preemptive, static, single resource scheduling that can be used for hard realtime. Priority is assigned according to the rate at which a periodic activity needs to execute: The higher the rate, which also means the shorter the period, the higher the priority. A supplemental result by Liu and Layland facilitates realtime analysis by proving that if the CPU utilization is less than approximately 69%, then the activity set will always meet its deadlines. For "dynamic" priority assignment that is also preemptive and single resource, Liu and Layland showed that earliest-deadline-first scheduling is optimal and that any activity set using it with a utilization less than 100% will meet all deadlines.
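
The following C++ sketch shows these two utilization tests in code form. The rate-monotonic test is a sufficient (not necessary) condition, and the vectors of worst-case execution times and periods are assumed to be in the same time units.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Total CPU utilization of a set of periodic activities with worst-case
    // execution times c[i] and periods t[i].
    double utilization(const std::vector<double>& c, const std::vector<double>& t) {
        double u = 0.0;
        for (std::size_t i = 0; i < c.size(); ++i)
            u += c[i] / t[i];
        return u;
    }

    // Rate-monotonic sufficient test: U <= n(2^(1/n) - 1), which approaches
    // ln 2 (about 0.69) as n grows.
    bool rm_schedulable(const std::vector<double>& c, const std::vector<double>& t) {
        double n = static_cast<double>(c.size());
        return utilization(c, t) <= n * (std::pow(2.0, 1.0 / n) - 1.0);
    }

    // Earliest-deadline-first test: any independent periodic set with U <= 1
    // meets all deadlines.
    bool edf_schedulable(const std::vector<double>& c, const std::vector<double>& t) {
        return utilization(c, t) <= 1.0;
    }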

 

Liu and Layland's elegant results come with strong assumptions. Among these assumptions is one which requires that activities are independent. Subsequent work has relaxed many of their assumptions. Rajkumar, Sha and others have shown that activity sets where the activities can coordinate via mechanisms such as semaphores, can still be analyzed if they use "priority inheritance" protocols [Rajkumar89]. In these protocols, a lower-priority activity that blocks a higher-priority activity (e.g. by holding a semaphore), inherits the priority of the higher-priority activity during the blocking. With priority inheritance techniques, "priority inversion", which is the time that a higher priority activity is blocked by lower priority activities, can be bounded and factored into the worst case execution time of each activity. Utilization analysis, such as Liu and Layland's, can then be used to determine if timing constraints will be met [Rajkumar89]. Non-periodic activities can also be accommodated using a "sporadic server" which periodically handles non-periodic activities. Note that to use rate-monotonic scheduling, the operating system needs at least a priority-based, preemptive scheduler, with priority inheritance, and all resource allocation and contention (e.g. resource queues) based on priority.

 

POSIX Realtime Scheduling.

The POSIX realtime operating system standards offer rudimentary realtime scheduling support in Unix-like systems. The POSIX standard mandates that the CPU scheduling be preemptive, priority-based, with a minimum of 32 priorities. Individual implementations may offer more priorities, but the minimum is 32. The scheduling algorithm is simple: the highest priority ready activity executes, possibly preempting a lower priority activity. Activities may dynamically change their own priority level or, in some cases, the priority level of other activities. Within a priority level, activities may be scheduled round-robin (with a system determined time quantum), or first-in-first-out (which is essentially round-robin with an infinite time quantum). This intra-priority scheduling is an unfortunate choice for realtime systems since it is not cognizant of timing constraints. Despite this limitation, rate-monotonic and earliest-deadline-first realtime schedulers have been built on realtime POSIX-compliant operating systems.
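
As a minimal sketch of how these operating-system facilities are used, the following C/C++ fragment, assuming a realtime POSIX-compliant system, places the caller under preemptive fixed-priority (SCHED_FIFO) scheduling and creates a priority-inheritance mutex. Error handling is abbreviated, and the scheduling call typically requires appropriate privileges.

    #include <pthread.h>
    #include <sched.h>

    // Put the calling process under preemptive, fixed-priority SCHED_FIFO
    // scheduling and create a mutex that uses priority inheritance.
    int configure_realtime(pthread_mutex_t* lock) {
        struct sched_param sp;
        sp.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            return -1;                       // typically requires privileges

        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        // A low-priority task holding the lock is boosted to the priority of the
        // highest-priority waiter (where _POSIX_THREAD_PRIO_INHERIT is supported).
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        return pthread_mutex_init(lock, &attr);
    }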

 

Resource Reservations.

Zhao, Ramamritham, and Stankovic [Zhao87] have developed realtime scheduling techniques that can handle dynamically arriving activities that may use multiple resources. These techniques are based on resource reservations where each activity attempts to reserve a time slot for itself during which it is guaranteed use of all resources that it requires. Activities are allowed to make resource reservation requests based on a priority; the highest priority makes the request first. Priority is a weighted function of the activity's deadline, execution time, and resource use. These techniques also allow for limited backtracking so that if allowing an activity to make reservations before another activity would cause deadlines to be missed, another pattern of reservations may be tried. They have demonstrated that their techniques achieve near-optimal results. Note that although these reservation techniques are not optimal, they have much less stringent assumptions than those of Liu and Layland.

 

Scheduling Imprecise Computation.

Liu [Liu91] proposed several algorithms for scheduling activities that allow imprecise computation. In these algorithms, activities are decomposed into a "mandatory" part and an "optional" part. The mandatory part is considered hard; the optional part is considered soft. These algorithms attempt to schedule all mandatory parts of activities and to schedule optional parts to minimize some error metric. The error metric indicates the consequences of not executing an optional part. Liu discusses several error metrics, each with a different scheduling algorithm that minimizes it. For instance, in systems with activity importance levels, error might be weighted by each activity's importance. The accompanying algorithm schedules optional parts of higher importance activities whenever possible. Although this work is a good first step towards managing the imprecision in realtime systems, further work on guaranteeing that resulting imprecision is within system limits is needed.

 

Fast CORBA.

Our stated definition of realtime involves timing constraints being met for the computation to be correct. An alternate definition of realtime that is sometimes used, and that we are careful to distinguish from our definition, is "fast computing". Fast computing must be done quickly, often faster than similar typical computing. Again, note that fast computing may be necessary for realtime computing, but that it is not sufficient. However, a "Fast CORBA", involving an optimized ORB that is much faster than typical ORBs, may be necessary to achieve realtime. Fast computing should be considered as part of realtime computing, and fast CORBA techniques need to be explored. "Realtime CORBA", however, involves more (such as scheduling techniques) than simply Fast CORBA.

 

Military C4I Realtime CORBA Requirements

TBD

 

 

Realtime CORBA

 

**** Add figure with a matrix of requirements below and how they related to markets ***

 

Introduction.

This section outlines general requirements for realtime CORBA. There are two main parts: changes to the CORBA standard and requirements on the CORBA operating environment (e.g. operating systems and networks). Although requirements on the operating environment are not typically specified in CORBA, the nature of realtime requirements dictates that the operating environment demonstrate some characteristics, such as priority-based operating system scheduling and bounded network message delay. Therefore, we describe these requirements here.

 

This section only specifies requirements on the underlying system and CORBA standard that are unique to realtime. After each requirement, a tag enclosed in square brackets [] indicates how the requirement applies to embedded (E), static (S), and dynamic (D) realtime applications (as defined in Section 2). A plus (+) following the letter indicates that the requirement is necessary for that class of application. A star (*) following the letter indicates that the requirement could be useful for that class of application, but is not necessary. A minus (-) following a letter indicates that the requirement is probably undesirable for that class of application (e.g. could introduce too much run-time overhead). No symbol following the letter indicates that the requirement would not impact the application. For instance, a tag of [E-,S*,D+] indicates that the requirement may be harmful in embedded realtime applications, is desirable in static realtime applications, and is necessary in dynamic realtime applications.

 

Operating Environment.

The nature of both hard and soft realtime, but particularly hard realtime, is that the enforcement of timing constraints is only as good as its weakest link. Therefore, Realtime CORBA imposes several requirements on its underlying operating environment. Our preliminary position is that Realtime CORBA will of necessity be implemented on top of some realtime Operating System (RTOS) and at least one realtime communication system (RT COMM), and that the implementation must take into account the independent use of the RTOS services by applications. The CORBA implementation must provide one of the following: a complete description of how it utilizes the RTOS and RT COMM, or documentation telling application developers how they can safely use the RTOS and RT COMM.

 

Synchronized Clocks[E*,S+,D+]. All clocks on nodes in an ORB must be synchronized to within a bounded skew of each other. For products supporting ORB interoperability, clocks on different interoperating ORBs must also be synchronized to within a bounded skew, but this skew may be different from the individual skews of the participating ORBs. The CORBA product must ensure the intra-ORB skew bound and, if applicable, the inter-ORB skew bound, and publish its skew bound.

 

Justification: a common notion of time is needed to enforce timing constraints across the system and to provide a synchronization mechanism.

 

Bounded Message Delay[E+,S+,D+]. The underlying communication mechanism must ensure a worst-case message delay from one CORBA system task to another. For products supporting ORB interoperability, message delays between tasks on different interoperating ORBs must also be bounded, but this bound may be different from the individual message delay bounds of the participating ORBs. The CORBA product is not necessarily responsible for enforcing this bound but may instead rely on the underlying network technology used with the CORBA product to achieve this requirement.

 

Justification: To reason about time and coordination in a distributed system, the worst-case message delay must be known.

 

 

Priority-based Scheduling[E*,S+,D+]. All components used in the underlying CORBA environment should support priority-based scheduling and queueing where a higher priority task is scheduled before a lower-priority task. This scheduling should be preemptive where possible (such as CPU scheduling). For example, realtime operating systems that comply with the IEEE realtime POSIX standard meet this requirement for CPU scheduling. Other resources such as network interface cards and network switches should also use priority-based scheduling and queueing.

 

Justification: Most static and dynamic realtime scheduling is implemented with priority-based low-level scheduling (how these priorities are assigned is a higher-level matter and addressed in Section 4.2). Any component not enforcing priorities (such as having a first-come-first-served queue instead of a priority queue) weakens the soft realtime enforcement and makes hard realtime analysis impossible.

 

Priority Inheritance[E*,S+,D+]. All components used in the underlying CORBA environment that synchronize tasks by blocking one task for another should implement priority inheritance.

 

Justification: This requirement is needed in soft realtime to better enforce priorities. It is required in hard realtime to bound the blocking time of tasks, which in turn is required to analyze their response time. As an example, an operating system semaphore used to coordinate tasks should elevate the priority of an executing task holding a semaphore to the priority of the highest-priority task that is waiting on that semaphore. Single-level priority inheritance in the operating environment is sufficient. The priority inheritance discussed for the ORB and Object Services in Section 4.2 must be transitive.

 

 

Resource Monitoring[E-,S-,D*]. For some dynamic hard realtime applications, the CORBA system must be able to determine the current utilization profile of each component over any arbitrary time interval. The system should also provide monitoring of computational events (e.g. method invocation) and engineering events (e.g. thread creation).

 

Justification: This requirement results in information being provided to the ORB to allow the ORB to determine a priori whether a resource can satisfy a request within timing constraints. This ability can be necessary in dynamic hard realtime and useful in soft realtime. QOS enforcement also involves monitoring of events.

 

Multiple Protocols[E-,S*,D*]. The underlying system should allow the introduction and simultaneous execution of multiple communication protocol stacks.

 

Justification: This allows different Quality of Service to be supported in communication.

 

 

Multi-threading[E*,S*,D+]. The underlying system should support multi-threading or some other form of concurrency.

 

Justification: Multi-threading allows for better establishment of realtime priorities and for better concurrent use of resources. Multi-threading should be supported in the operating systems and other parts of the environment. Application-level threading is described in Requirement R4.2.11.

 

 

4.2 REQUIREMENTS ON ORB AND ORB SERVICES

 

Requirements on a realtime distributed operating environment fall into two categories: architectural features and architectural services. Architectural features outline what realtime features should be present and why. The second category deals with their implementation using Services, loosely defined as any code inside some objects or the system's internal implementation that guarantees the desired features. Figure 1 summarizes all the features and services we found to be necessary or desirable in a system supporting realtime requirements. In this discussion, we assume that the distributed system uses realtime operating systems.

 

 

Architecture Features

- timed method invocations

- synchronized clocks

- bounded communications

- preemptive priority-based scheduling

- priority queues

- priority inheritance

- quality of service (not required)

- RT exception mechanism (not required)

 

Architecture Services

- Global Time

- Priority Assignment

- Scheduling

- Reservations (hard real-time)

- Gateway (security)

- Naming

- Event

- Life Cycle

- Concurrency Control

 

Figure 1. List of realtime distributed architecture requirements.

 

In order to implement timing constraints on the execution of a task we have to take into account the delivery time of the request and the reply. Thus, each method call has to carry some timing information indicating, for example, when it should be delivered or how much time is allowed until the method return is expected. To make this time information meaningful across nodes, system-wide synchronized clocks are required. To ensure predictability during the execution of method calls, it is necessary to have bounded communications, i.e., we need to know the maximum amount of time it would take for a call to complete in the worst possible case. If the calls are unbounded, by the time the method starts executing on a server its deadline might have already passed and we are wasting our time even trying to do the task. The client has to specify enough information (for example, execution times of the server's methods, timing constraints and/or quality of service parameters) for the server to choose which method to execute. With the potential multitude of various methods being called in the system, we need to ensure that the tasks of higher importance receive proper treatment. Hence, scheduling based on priority is in order. For priority-based scheduling to work, the system must support priority queues in all places where queuing is expected, as well as priority inheritance. If even one bottleneck in the system does not support priority queuing, then the whole system's priority scheduling becomes unpredictable.
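
A small hypothetical sketch of the point about unbounded delay: before executing a request, the server (or the ORB on its behalf) can compare the synchronized current time, plus the method's published worst-case execution time, against the deadline carried on the invocation, and reject work that can no longer complete usefully. The structure and field names are illustrative only.

    // Illustrative request timing record and admission check.
    struct RequestTiming {
        double deadline;          // absolute time by which the reply must be complete
        double worst_case_exec;   // published worst-case execution time of the method
    };

    // 'now' is assumed to come from the globally synchronized clock.
    bool worth_executing(const RequestTiming& rt, double now) {
        // If even the worst case cannot finish before the deadline, executing
        // the method only steals resources from tasks that can still succeed.
        return now + rt.worst_case_exec <= rt.deadline;
    }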

 

Services.

Naming service: Enables objects to be located by name. The names are kept in one or more registries that map names to object (or interface) pointers. Once an object is found, a pointer to its interface is returned to the calling application. This service must exist for clients to find their servers.

 

Life Cycle service: Manages the objects' life-span. This service could be centralized or distributed, depending on whether some entities wish to keep track of existing objects or to leave that responsibility to the objects themselves. In the centralized implementation every new object has to register with a database and deregister when it goes away. The distributed implementation may mandate that objects keep reference counters, incrementing the counter for each connected client and decrementing it when a client disconnects. When the reference counter goes to zero (no clients are interested in the service) the server object may go away or choose to stay "alive" if it is required frequently.

 

Priority Assignment service: Assigns priority to a client or task. This service as well may be implemented as centralized or distributed. In the centralized implementation each client needs to ask some Priority Server (PS) object for a priority assignment before it can proceed to connect to any servers. The PS could be ‘universal’ - for all objects in the system or ‘local’ - serving some pre-defined subset of objects (for example in a hierarchical system it can serve only some branch of the hierarchical tree). In the distributed version, it may be possible to use some heuristic such that clients themselves can calculate their priority, however in this case the priority must be independent of the other clients in the system. All priority assignments must be synchronized with the scheduling policy enforced by the Scheduling Service (discussed next).

 

Scheduling service: Designates the order in which tasks or methods execute. The scheduling policy strictly depends on the mode of the real-time system desired (hard/firm/soft) and the known information about the tasks. Rate-monotonic or earliest-deadline-first priority assignments can be made, as can other forms, to achieve realtime scheduling. Each of the algorithms uses a different heuristic for priority assignment. For example, rate-monotonic scheduling assigns higher priorities to tasks with shorter periods. EDF assigns higher priorities to tasks with tighter deadlines. Other algorithms assign priorities based on the importance of the task. Thus, the Scheduling and the Priority services should be implemented to complement each other. Regardless of the chosen heuristic, each step of the task execution must have an associated priority, with scheduling and queuing that enforce that priority ordering.
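
The following C++ sketch shows the three priority-assignment heuristics mentioned above as simple comparison functions; each comparator returns true if task a should be given higher priority than task b under that heuristic. The task fields are illustrative only.

    // Illustrative task descriptor.
    struct Task {
        double period;      // for rate-monotonic assignment
        double deadline;    // absolute deadline, for earliest-deadline-first
        int    importance;  // application-assigned importance level
    };

    // Rate-monotonic: the shorter the period, the higher the priority.
    bool rm_higher_priority(const Task& a, const Task& b)  { return a.period < b.period; }

    // Earliest-deadline-first: the tighter the deadline, the higher the priority.
    bool edf_higher_priority(const Task& a, const Task& b) { return a.deadline < b.deadline; }

    // Importance-based: defer to the application's importance levels.
    bool imp_higher_priority(const Task& a, const Task& b) { return a.importance > b.importance; }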

 

Note that neither the Priority Service nor the Scheduling Service actually has to be implemented; they can be thought of as concepts instead. For example, in some hard real-time systems the clients may have pre-determined, hardwired priorities, computed off-line, that they use throughout the system. If priority queuing is properly implemented, the system will operate properly in a priority-based manner without actually calling any on-line priority service or scheduler.

 

Event service: Manages real-time events occurring in the system. For example, some objects may wish to be notified when some events pertaining to other objects occur. A client that wants to be informed when a particular server obtains a piece of information may file a request with the Event service and expect a call back when the server information becomes available. Likewise, a whole transaction of tasks may start executing, triggered by an event broadcast by the Event service. The events must be in priority order at the collection objects.

 

Concurrency Control Service: Controls the logical and temporal consistency of the client/server interactions throughout the system. It is desirable for the objects themselves to have real-time concurrency control in their internal processing, while the system service concentrates on object-to-object activity. To support real-time scheduling, an RTCC should be implemented that supports four real-time concurrency control requirements: transaction temporal consistency, transaction logical consistency, data temporal consistency, and data logical consistency. A semantic real-time concurrency control technique exists that requires compatibility tables in each object, defining which method invocation interleavings are allowed at any given time. Semantic locking is used to enforce the allowable concurrency expressed by the compatibility function of an object. A task (or transaction) must acquire a semantic lock for a method invocation before the method is allowed to execute. An RTCC should also support priority inheritance to better enforce priorities and to bound the blocking time of real-time tasks.
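
A hypothetical C++ sketch of the semantic locking idea follows: each object carries a compatibility function stating which method interleavings are allowed, and a semantic lock is granted only if the requested method is compatible with every method currently active in the object. The object, method names, and compatibility table are illustrative, not part of any proposed standard.

    #include <cstddef>
    #include <vector>

    enum Method { READ_STATE, UPDATE_STATE };   // methods of an illustrative sensor object

    // Compatibility function: concurrent reads are allowed; an update is
    // incompatible with any other in-progress method.
    bool compatible(Method running, Method requested) {
        return running == READ_STATE && requested == READ_STATE;
    }

    // Grant a semantic lock only if the requested method is compatible with
    // every method currently active in the object; otherwise the caller blocks
    // (with priority inheritance) or is rejected.
    bool acquire_semantic_lock(const std::vector<Method>& active, Method requested) {
        for (std::size_t i = 0; i < active.size(); ++i)
            if (!compatible(active[i], requested))
                return false;
        return true;   // caller records the method as active and proceeds
    }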

 

 

R4.2.1 Time Type[E*,S+,D+]. The CORBA standard must specify a standard type for absolute time and relative time.

 

Justification. This is needed to express timing constraints and to coordinate based on time. This may be an IDL change or simply a standard type included in realtime CORBA implementations.

 

R4.2.2 Transmittal of Realtime Method Invocation Information[E-,S+,D+]. The standard must allow the following client information to be established by the client and attached to its method invocation request so that the information is available to the ORB, ORB Services, skeletons, and server implementations (for definitions of these terms, see Section 2 or the glossary):

- Deadline [E-,S+,D+]

- Importance [E-,S+,D+]

- Earliest Start Time [E-,S+,D+]

- Latest Start Time [E-,S*,D+]

- Period [E-,S+,D+]

- Guarantee? [E-,S*,D+] - whether or not the client wants an a priori guarantee that the method invocation will meet timing constraints.

- QOS [E-,S*,D+] - what quality of service is required? This allows a polymorphic choice of service by the ORB and server implementation.

- [Anything else folks????]

 

 

Justification: The stubs, ORB, underlying system, BOA, skeletons, and server may all need this information in order to set priorities, alarms, and whatever else is needed to enforce the requirements specified in the information, so it should be "attached" to the method invocation.
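
A hypothetical sketch of the information of R4.2.2, collected into a single structure that could be attached to an invocation, is shown below. None of these names are part of the CORBA standard, and the time representation is illustrative.

    // Hypothetical realtime information attached to a method invocation (R4.2.2).
    struct RealtimeInvocationInfo {
        double earliest_start;   // absolute time before which the call may not start
        double latest_start;     // absolute time by which the call must have started
        double deadline;         // absolute time by which the call must complete
        double period;           // period for periodic invocations (0 if aperiodic)
        int    importance;       // relative importance, used for priority assignment
        bool   guarantee;        // client wants an a priori schedulability guarantee
        int    qos;              // requested quality of service level
    };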

 

 

R4.2.4 Global Priority [E,S+,D+]. The ORB must establish priorities for all execution. These priorities must be "global" across the ORB. That is, the priorities of any tasks that compete for any resource in the CORBA environment must be set relative to each other.

 

Justification: It does not make sense for a task to compete for a resource with a task that has priority assigned from a different mechanism. If two tasks are competing, the ORB and low-level schedulers must be able to determine which to execute first. This requires that the competing tasks have priorities that "make sense" relative to each other.

 

Several priority schemes, such as rate-monotonic priority assignment, earliest-deadline-first priority assignment, or a variation that weights tasks based on importance, are possible. The standard should not dictate how priorities are set. Instead, the standard should specify that the information needed to set priorities is available, and that the priorities will be enforced.

 

Embedded systems may not need (or want) this service since they can be scheduled off-line without priorities.

 

 

R4.2.5 Priority Queueing of All CORBA Services[E*,S+,D+]. All CORBA-level software must use priority based queuing.

 

Justification: Enforcing priorities at *all* points in the end-to-end path, including CORBA service requests, is desirable for soft realtime and necessary for hard realtime. For instance, queues of requests for CORBA 2.0 services such as Naming or Lifecycle should be priority queues.

 

 

R4.2.6 Events For Timing Constraints[E-,S,D+]. The CORBA environment must provide the ability for clients and servers to determine the absolute time value of "events". These events may include the current time (provided by a Global Time Service), or named events provided by the CORBA 2.0 Event Service.

 

Justification: The specification of timing constraints (Requirement 4.2.2) requires the determination of the time for the constraint. These times are absolute times. Most application specifications denote these times as relative offsets from events. For instance "within 10 seconds" typically means "within 10 seconds of the current time" and thus needs the current time event. "Within 10 seconds of completion of Task A" needs the time that a named event for "completion of Task A" occurred.
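
A minimal sketch of this conversion is shown below: the relative offset stated in the application specification is added to the absolute time of the referenced event (obtained from the Global Time Service or the Event Service) to produce the absolute deadline the ORB enforces.

    // Turn "within <offset> of <event>" into the absolute deadline the ORB enforces.
    // event_time is assumed to come from the Global Time Service or Event Service.
    double absolute_deadline(double event_time, double relative_offset) {
        return event_time + relative_offset;
    }
    // e.g. "within 10 seconds of the current time"      -> absolute_deadline(now, 10.0)
    //      "within 10 seconds of completion of Task A"  -> absolute_deadline(task_a_done, 10.0)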

 

 

R4.2.7 Priority Inheritance[E-,S+,D+]. All CORBA/RT-level software that blocks one task while another is executing should use priority inheritance.

 

Justification: This requirement is needed in soft realtime to better enforce priorities. It is required in hard realtime to bound the blocking time of tasks, which in turn is required to analyze their response time. This requirement includes the locking done by the CORBA 2.0 Concurrency Control Service, but also includes simple queuing such as waiting for the Name Service. It also includes transitive priority inheritance, where one server calling another service observes priority inheritance at all places in the service call chain.

 

 

R4.2.8 Performance Polymorphism[E-,S,D*]. The CORBA/RT environment should support Performance Polymorphism, where the actual method that a client invokes on a server is determined by the timing constraints established by the client.

 

Justification. Performance Polymorphism is an essential component of supporting Quality of Service parameters found in many realtime applications. Two forms of performance polymorphism support can be provided. "Client Controlled" performance polymorphism simply requires that the client is provided enough information (such as execution times of server methods) to choose which method to invoke. Note that this form is relatively simple to implement, but is expected to be slower for the client to execute. "ORB Controlled" performance polymorphism requires that the client specify its timing constraints and QOS, and the ORB then chooses which method to execute. "ORB Controlled" is harder to implement, but would yield a faster interface for the client.

 

R4.2.9 Realtime Exceptions[E-,S+,D+]. The CORBA exception mechanism must be extended to raise the following exceptions for handling within the context of the CORBA exception handling mechanism:

 

MISSED DEADLINE - this exception is raised in the client when its specified deadline has expired and its requested service has not completed.

MISSED DEADLINE is raised in the server when the server's interpretation of the client's deadline expires.

 

MISSED PERIOD - this exception is raised in the client when its specified end of period has expired and its requested service has not completed.

MISSED PERIOD is raised in the server when the server's interpretation of the client's end of period expires.

 

MISSED START - this exception is raised in the client and server when the client's specified latest start time was not met by the server.

 

NO GUARANTEE - this exception is raised in the client before a method execution if a client requests a guarantee of a certain quality of service within timing constraints and the ORB can not provide the guarantee.

 

Justification: these exceptions allow user-level recovery from realtime exceptions.
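
As a purely illustrative sketch, the four exceptions might be modeled as follows in C++; a real Realtime CORBA specification would express them through the CORBA exception mechanism and IDL rather than these hypothetical classes.

    #include <stdexcept>

    // Hypothetical C++ renderings of the R4.2.9 exceptions.
    struct MissedDeadline : std::runtime_error { MissedDeadline() : std::runtime_error("MISSED DEADLINE") {} };
    struct MissedPeriod   : std::runtime_error { MissedPeriod()   : std::runtime_error("MISSED PERIOD") {} };
    struct MissedStart    : std::runtime_error { MissedStart()    : std::runtime_error("MISSED START") {} };
    struct NoGuarantee    : std::runtime_error { NoGuarantee()    : std::runtime_error("NO GUARANTEE") {} };

    // User-level recovery example (function names hypothetical): fall back to
    // best effort when the ORB cannot guarantee the requested quality of service.
    //   try { invoke_with_guarantee(); }
    //   catch (const NoGuarantee&) { invoke_best_effort(); }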

 

R4.2.10 API For Protocols[E-,S*,D*]. The CORBA/RT standard should allow for the ability to dynamically indicate which communication protocol stack is to be used and to set protocol parameters.

 

Justification: In many applications, such as telecommunications, legacy protocols and new protocols exist and must be managed. Managing the various protocols also allows for QOS determination.

 

R4.2.11 Application-Level Concurrency[E*,S*,D*]. The CORBA/RT standard should support an application-level concurrency package, such as multi-threading. This package should support event-to-thread mapping.

 

Justification: Multi-threading allows for better establishment of realtime priorities and for better concurrent use of resources. In realtime systems, many threads are executed based on events, thus event-to-thread mapping is important.

 

R4.2.12 Documented Execution Times[E+,S+,D+]. The CORBA standard should specify that vendors must publish worst case delays for the execution of all functions in their product.

 

Justification. This is necessary for realtime analysis and QOS guarantees.

 

 

R4.2.13 ORB Guarantees[E-,S*,D*]. If the client specifies that it wishes a guarantee for a certain QOS with timing constraints, the ORB should be able to either guarantee it, or raise the NO GUARANTEE exception.

 

Justification. This is needed for hard RT and for early detection of timing constraint violations. It is difficult to implement and could only be part of a "Hard CORBA/RT" standard.

 

 

R4.2.14 Server Specification of QOS parameters[E-,S,D*]. The CORBA/RT standard should allow for the specification by servers of their QOS parameters, including, but not limited to, worst case execution times of methods.

 

Justification: this is necessary for performance polymorphism and QOS so that clients and/or the ORB can determine if there is time to achieve the desired QOS.

 

 

R4.2.15 Dynamic Task Allocation to a Node[E-,S,D*]. The ORB should support a "smart" bind that directs a client's request to the node best able to satisfy its QOS parameters within timing constraints.

 

Justification. This requirement allows load balancing based on realtime criteria. If multiple servers can satisfy the client's request, a RT ORB should be able to determine which is best using realtime criteria (such as the best QOS under timing constraints).

 

 

R4.2.16 Tailorability[E+,S*,D*]. The ORB and services should be constructed to support tailoring the ORB by "paring it down" and eliminating functionality and services.

 

Justification: To achieve fast-enough execution, it must be possible to by-pass several parts of the CORBA environment. For instance an embedded application would probably wish to by-pass the Dynamic Invocation Interface to achieve faster performance.

5 Why Realtime CORBA?

To create an open distributed environment that supports the object-oriented paradigm and is suitable to meet real-time requirements, we evaluated commercial standard environments for their suitability in meeting the realtime requirements described in the previous section. This section evaluates the only two really viable object-oriented software interoperability standards: CORBA and OLE.

 

Before describing our commercial standard evaluation for realtime, we first answer the question: why real-time middleware at all? Why not just use real-time operating systems, which are becoming mature and widely available? Or, why not just use fast networking? Or, why not use a commercial CORBA product running on a real-time operating system, like Iona’s ORBIX running on Lynx OS? The answer to these questions can be summed up in the requirements for realtime distributed systems described earlier: the need to enforce end-to-end timing constraints. That is, although realtime operating systems, fast networks, and middleware that runs on these platforms are necessary components to meeting distributed realtime requirements, they are not sufficient. Realtime constraints must be supported in all parts of the end-to-end path of distributed computation. Thus, the middleware itself must have realtime features built into it. That is why we need to look at augmenting commercial standards, like CORBA, with realtime capabilities that they currently lack.

 

The remainder of this section examines CORBA and OLE and determines which distributed realtime features they do and do not have. It summarizes by recommending that CORBA is much better suited to augmentation with realtime support than is OLE.

OLE For Realtime.

OLE does not support timed method invocations, synchronized clocks, or preemptive priority-based scheduling. Neither does it have priority queues or guarantee bounded communications. Although all of these features could be implemented in one way or another, some of them might require substantial changes to COM’s architecture.

 

The timed method invocations can be implemented by requiring each method on the interface to have a parameter to hold the timing information. The parameter type and name should be standard throughout the system for quick handling of the data.

 

The synchronized clocks between the system nodes can be achieved using one of many clock synchronization algorithms for distributed systems. It could operate as an ‘internal’ service, periodically making high priority RPC calls to various nodes in the system. On the other hand it may be sufficient to have one Global Time server that everyone has to ask ‘what time is it?’ every time a time-stamp needs to be placed or compared - in this case the server must be running on a high throughput system capable of handling many calls simultaneously.
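
A minimal sketch of the request/reply style of synchronization against a single Global Time server, in the spirit of Cristian's algorithm, is shown below; the estimated offset is accurate to within half the round-trip time, which is why bounded communication delay is needed for a bounded clock skew. Names are illustrative.

    // One round of "what time is it?" against a Global Time server.
    struct TimeSample {
        double local_send;      // local clock when the request was sent
        double local_receive;   // local clock when the reply arrived
        double server_time;     // time stamp carried in the reply
    };

    // Offset to add to the local clock to approximate the server clock.
    // The error is bounded by half the round-trip time, so the round trip
    // itself must be bounded for the skew bound to hold.
    double estimated_offset(const TimeSample& s) {
        double round_trip = s.local_receive - s.local_send;
        return s.server_time + round_trip / 2.0 - s.local_receive;
    }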

 

Although bounded communications are largely a hardware issue, on the architecture level the system must make sure it does not have any unbounded queues or heuristics that can postpone a method invocation for an unpredictable amount of time. In COM it is imperative to bound the time it takes to receive a pointer to the IUnknown interface of an object from the Naming server, and later an interface pointer from the IUnknown itself. Once a client has a pointer to an interface, the system must guarantee a bounded execution of any method call on that interface.

 

The priority queues must be installed in the Naming server (registry); on every interface, stub and proxy; in the IStream and IStorage interfaces of Persistent Storage; and in all memory allocation and deallocation APIs. If a process becomes blocked (postponed because a lower priority task has a lock on a resource) for an unpredictable amount of time at any level, be it file access or a memory allocation request, the whole system becomes unpredictable and no scheduler can guarantee any deadlines. At each point some priority ceiling protocol must be used to properly handle all lockable resources.

 

OLE Service Implementations.

OLE supports two services: Naming and Life Cycle. The first one is centralized and thus would need to have priority queuing added immediately, to guarantee preference for the naming requests of higher priority clients. The second would probably have to change to prevent servers from disappearing when their reference count goes to zero. It is potentially time wasteful and inefficient to shut down and start up the same servers over and over again, especially if the clients that access them are periodic. There is an OLE API that forces a server to stay around even when it feels it is no longer needed - for real-time systems this should probably be a necessary configuration for all critical servers.

 

The Priority Assignment and Scheduling services could be implemented internally as APIs, as objects in the system or not physically implemented at all, depending on the type of system and scheduling desired.

 

The Concurrency Control service should be implemented such that each resource that needs to support concurrent access has a lock-management object associated with it. This object would be responsible for granting and releasing locks on the resource, tracking which clients hold which locks, and implementing priority inheritance when appropriate. The information about which processes are related to one another (needed to enforce the transitive nature of priority inheritance) could be maintained on each node to help distribute the processing required by priority inheritance. This could be important in hard real-time systems with Zhao’s reservations [Zhao87].

 

To be able to influence locks depending on the temporal consistency of the data inside the objects, the Concurrency Control service must have some information about the age of the data in the objects. This information can be passed as a return value or in out/in-out parameters of the lock calls on the objects.

 

The Event service could be implemented by introducing the awareness of priority into the sink objects provided by COM for client/server event exchange (A sink object collects events that a client is interested in and that a server invokes when an event occurs).

 

OLE advantages for realtime:

It is a good idea to have some standard interface methods that can be used by any client universally. For example, a client would be able to ask any server about what services it provides and how fast it can do them. This information might lead clients themselves to decide what servers would work best for them. The methods could also be used to make scheduling decisions, where the Scheduling Service might be interested in the client load of all objects in the system, regardless of their designations.

 

COM’s ability to define multiple interfaces on objects may be used in real-time to achieve performance polymorphism, giving the clients an opportunity to choose which interface they want. On the other hand, the client may specify its deadline to the IUnknown interface (the one that finds the interface pointers for the clients), which could do some extra work, calculating which interface would work better for the client under its current timing constraints and/or specified quality of service.

 

 

OLE disadvantages for realtime:

COM does not seem to perform any kind of object or resource locking; however, we found a statement in the COM Specification stating that because COM deals with operating system resources, such as memory and the file system, it expects the operating system to enforce whatever rules it has specified with respect to multiple access. Given this note, it is unclear how various systems running on different operating systems would be able to display predictable behavior. Hence, it is probably a good idea for the object architecture to specify universal locking, managed by the Concurrency Control service rather than the operating systems.

 

COM specifies file and memory management techniques that would now have to adhere to real-time database requirements. Realtime databases are difficult to build and many issues involving their design and implementation are still in research stages. At the time of this writing only a few attempts have been made to implement any RTDBs. Unless Microsoft is interested in writing comprehensive real-time database behavior support for the Persistent Storage and memory management in COM, it is best that the use of these not be mandated of COM objects if COM is to have any chance of being used for real-time in the near future.

 

Because COM is built with transparency in mind, it mandates a significant number of standard interfaces and communications through its implementation. Although standardization may be good for compatibility, upgradability and application development, it adds increasing overhead to system resources and services that would have an adverse effect on the performance critical to all real-time systems.

 

 

Although it is possible to implement realtime requirements in OLE/COM (DCOM), some would require substantial architectural changes that are costly and difficult to achieve. The architecture is best suited for commercial environments where timing constraints are not of significant importance.

CORBA for realtime.

CORBA 2.0 specifies a comprehensive set of services, including: Naming, Event, Life Cycle, Persistent Object, Transaction, Concurrency Control, Relationship and Externalization. As in the case of COM, to support real-time requirements some new services have to be added (see Section 2.5). All others have to be changed to support priority queues and become priority-conscious. Everything said about the service implementations in the OLE analysis also applies here. Although CORBA specifies many more services, none of them support priority queuing, etc., and, hence, they would need to be modified. It is our suspicion that some services are inherently too "massive" for any real-time applications. For example, the Relationship Service currently has too much overhead to be useful in real-time. However, the good side is that CORBA does not mandate the use of any of its services, except perhaps Naming. Therefore, when adding real-time support, it is only necessary to make the Naming service support real-time features and then add/modify other services incrementally.

 

In this way, to get a basic distributed object architecture functionality for real-time applications we can concentrate on making the ORB real-time. The ORB does not seem to impose any inherent restrictions that might inhibit implementing any real-time requirements. In fact, it might be easier to implement the timed method invocation here than in COM. This is because in CORBA every method call is accompanied by an object called context that carries some system-related information, but that can also be extended to carry any additional data - for example the timing constraint information. Hence, the timing constraints may be added quickly by simply appending a few data items to the object structure.

 

Although the addition of timing constraint data to the CORBA method calls might be "easier" to implement, the question is whether it is in good object-oriented style to use the context object, not initially designed for this type of information, to carry this data. If we decide not to use this loophole, it is not a big stretch of the standard either to specify the timing constraint data as a part of the context object, or to require all method calls to carry an additional parameter specifically for this purpose via IDL and the Static Invocation Interface (this is probably the only way to implement this in COM). The additional parameter would have to be added to each RT method call in its IDL specification. Regardless of the approach, the amount of data escorting the method call remains the same, only in the latter case there is one extra parameter to marshal.

 

 

Although CORBA does not support some "nice" features like multi-interfacing, it does not have any inherent design characteristics that would hinder the implementation of real-time extensions. Hence, we believe it to be a good candidate for extension with real-time features, once the communications can be bounded on the hardware level.

Summary.

This summary examines both the OLE/COM (DCOM) and CORBA architectures against the distributed realtime requirements detailed earlier in the report. COM has shown an innovative design, capable of supporting a number of useful, robust features, including multi-version and multi-interface support, seamless communications, transparency-oriented client/server interactions, and extensive memory and file system extensions, that might make it a good choice for many commercial environments. However, COM falls short of being able to easily incorporate real-time features because it imposes extensive (and perhaps excessive) standards on all communications, memory access and file systems that would be hard to make real-time. Additional overhead for easy upgradability and extendibility is abundant, capable of imposing large delays in a real-time environment. Extending this architecture with real-time requirements seems an excessively costly and difficult undertaking that might not lead to fruitful results. When compromises are made, additional overhead for "nice" features often loses out in real-time, especially in hard realtime environments.

 

On the other hand, CORBA's basic structure is comparatively lightweight, because all of the services except Naming are optional and the architecture does not impose regulations on object connectivity or interoperability. Hence, from both the design and the implementation perspective, it stands a better chance of accommodating real-time support easily.

 

The comparison is summarized below as a feature and functionality comparison of CORBA and OLE with respect to the set of CORBA specifications. Some OLE features are annotated with speculations about their possible implementations.

 

 

Notes on the numbered features in the comparison diagram

 

CORBA Core.

  1. OLE uses the Component Object Model (COM), the underlying system software that defines binary interfaces for objects.
  2. The Microsoft IDL (MIDL) compiler can take IDL definitions and generate the files necessary for the creation of objects.
  3. A CORBA object may have only one DII; an OLE object may have many. Interfaces are identified by interface identifiers, which are created by running a function called CoCreateGuid.
  4. CORBA uses the Interface Repository as persistent storage for interface definitions. OLE employs the type library (class descriptions, interface descriptions, dispatch interface descriptions, module descriptions, and type definitions) and dynamic invocation.
  5. OLE has a registration process through which each interface is assigned a globally unique identifier (GUID). An object must be initialized using the IClassFactory::CreateInstance function.
  6. COM enforces a QueryInterface method that allows clients to dynamically determine a server's interfaces. The IDispatch interface can be used to interpose the desired functionality for implementations with no static type information (see the sketch after this list).
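To make note 6 concrete, the following C++ fragment sketches the client side of COM interface negotiation: the client creates an instance through COM and then uses IUnknown::QueryInterface to discover, at run time, whether a particular interface is supported. The interface IMyInterface and the identifiers CLSID_MyServer and IID_IMyInterface are placeholders with dummy GUID values rather than a real registered component; CoCreateInstance, QueryInterface, and Release are the genuine COM calls.

    #include <objbase.h>   // CoInitializeEx, CoCreateInstance, IUnknown

    // Placeholder identifiers with dummy values; a real component would
    // publish GUIDs generated with CoCreateGuid and register its server.
    static const CLSID CLSID_MyServer =
        {0x11111111, 0x1111, 0x1111, {0x11,0x11,0x11,0x11,0x11,0x11,0x11,0x11}};
    static const IID IID_IMyInterface =
        {0x22222222, 0x2222, 0x2222, {0x22,0x22,0x22,0x22,0x22,0x22,0x22,0x22}};

    struct IMyInterface : public IUnknown {        // hypothetical interface
        virtual HRESULT STDMETHODCALLTYPE DoWork() = 0;
    };

    HRESULT use_component() {
        IUnknown* unk = nullptr;
        // Ask COM to locate the class factory and create an instance.
        HRESULT hr = CoCreateInstance(CLSID_MyServer, nullptr,
                                      CLSCTX_INPROC_SERVER, IID_IUnknown,
                                      reinterpret_cast<void**>(&unk));
        if (FAILED(hr)) return hr;

        IMyInterface* itf = nullptr;
        // Dynamic interface discovery (note 6 above).
        hr = unk->QueryInterface(IID_IMyInterface,
                                 reinterpret_cast<void**>(&itf));
        if (SUCCEEDED(hr)) {
            itf->DoWork();
            itf->Release();
        }
        unk->Release();
        return hr;
    }

    int main() {
        if (FAILED(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED))) return 1;
        HRESULT hr = use_component();   // fails unless the CLSID is registered
        CoUninitialize();
        return SUCCEEDED(hr) ? 0 : 1;
    }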

 

CORBA Language Bindings.

1-3. CORBA has defined language mappings for statically typed languages such as C++ and C, and for a dynamically typed language such as Smalltalk.

1-3. OLE currently provides support for implementing objects in C++ and C. The OLE automation service makes it easy to bind to other languages (there are examples of bindings to Visual Basic). A sketch of what a language mapping looks like in practice is given below.
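For illustration, the sketch below pairs a small hypothetical IDL interface (Thermometer, shown as a comment) with roughly the kind of C++ proxy class an IDL compiler generates; the exact generated names, memory-management rules, and helper types vary with the mapping version and the vendor. The same IDL could equally be mapped to C, to Smalltalk, or, via OLE automation on the COM side, to Visual Basic.

    // Hypothetical IDL definition (Thermometer is an illustrative name):
    //
    //   interface Thermometer {
    //     readonly attribute double temperature;
    //     void set_units(in string units);
    //   };

    #include <string>

    // Roughly the shape of a generated C++ proxy; a real mapping adds
    // marshaling code, exception handling, and reference-counted handles.
    class Thermometer {
    public:
        virtual ~Thermometer() = default;
        virtual double temperature() const = 0;               // readonly attribute
        virtual void set_units(const std::string& units) = 0;
    };

    // Client code is written against the proxy, not the implementation.
    double read_fahrenheit(Thermometer& t) {
        t.set_units("F");
        return t.temperature();
    }

    // Stand-in implementation so the sketch compiles and runs on its own.
    class FixedThermometer : public Thermometer {
    public:
        double temperature() const override { return 70.3; }
        void set_units(const std::string&) override {}
    };

    int main() {
        FixedThermometer t;
        return read_fahrenheit(t) > 0.0 ? 0 : 1;
    }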

 

CORBA Interworking.

1-3. OLE uses Microsoft RPCs with COM's APIs and the DCE security protocol to connect OLE environments.

 

 

CORBA Services.

  1. OLE currently supports a Naming Service.
  2. An Event Service can be implemented by defining outgoing interface(s) and specifying a SINK object that the server invokes when an event occurs.
  3. OLE supports a Life Cycle Service.
  4. OLE standardizes structured storage; hence an object can choose to implement any of the standard interfaces IPersistFile, IPersistStream, or IPersistStorage to support it.

5-11. OLE does not provide any of the other Services, except perhaps a Query Service, which can be implemented using the QueryInterface method on the IUnknown interface of COM objects.

 

CORBA Facilities.

CORBA facilities are in the process of standardization.

5.1 Why use CORBA at all in a distributed realtime application?

 

 

5.2 Why not just an ORB on a real-time operating system? Why not just a fast network?

There are several implementations that claim to be real-time ORBs by virtue of residing on real-time operating systems (e.g., operating systems compliant with the real-time POSIX standards). Again, residing on an RT OS may be necessary for RT CORBA, but it alone is not sufficient. In particular, RT operating systems do not address the enforcement of end-to-end timing constraints across the distributed environment. Similarly, real-time or fast communication may be necessary for RT CORBA, but it is certainly not sufficient.

 

6. Impact on CORBA

This section discusses the impact that real-time requirements might have on the CORBA standard. The discussion is speculative and is intended to illustrate the issues and propose some areas of real-time modification to CORBA. These are not final proposals; rather, they outline topics to facilitate development of the modifications to CORBA.

 

6.1 Scope of Modifications

 

A primary question in the development of real-time extensions to CORBA is: shall there be one Real-Time CORBA standard? Or shall there be several Real-Time CORBA standards (e.g., one for Hard RT CORBA, one for Soft RT CORBA, one for Fast CORBA)? Shall there be a single all-encompassing standard with profiles (as in POSIX)? There are two reasons to ask these questions. First, a complete real-time standard that handles hard, soft, and fast requirements may be too extensive for feasible vendor implementations. Second, some requirements from the hard, soft, and fast domains may conflict with each other. For instance, a Fast CORBA may seek to increase run-time efficiency by not having an involved scheduling policy in the ORB, whereas hard and soft real-time seem to dictate such a scheduling policy.

 

In this paper we assume a single Real-Time CORBA standard, which allows us to specify requirements and their implications on the standard in one place. Profiles derived from this standard could then support soft real-time, hard real-time, fast execution, or application-specific requirements.

 

Another question that needs to be addressed is whether the CORBA environment is "open" or "closed". Typically CORBA mandates an open environment, where clients and servers can be transparently added to or removed from the system. The general benefits of an open environment still apply in a real-time application. However, a hard real-time application, where a priori prediction of timing behavior is required, may have to be "closed". That is, hard real-time may require that all clients and servers be fixed before execution so that it is at least possible to analyze timing behavior in advance. Thus, although the general CORBA philosophy is to provide an open environment, a hard real-time profile may need to restrict that openness.

 

In this paper, we assume an open environment. We leave closing the environment as a possibility in a hard realtime profile.

 

6.2 Suggested Enhancements to the CORBA Standard

 

The following are possible areas of modification to the standard. They are terse suggestions that would be fully worked out in an eventual standard.

 

M6.1 The RT CORBA standard shall specify that it assumes an operating environment that meets the requirements outlined in Section 4.1.

 

M6.2 The RT CORBA standard shall encourage modularity to facilitate removing features in order to meet hard real-time or fast application requirements.

 

M6.3 The RT CORBA standard shall include a Global Priority Service. The standard must specify the interface to this service, not how priority is assigned. The RT CORBA standard shall specify that all requests in the CORBA environment specify their global priority and that this priority is used in all scheduling and queueing in the environment.

 

The interface should be flexible enough to allow the implementations of many different priority assignments. For instance, it should accept a deadline and/or a period so that EDF and/or Rate Monotonic scheduling (respectively) can be used. It might also accept QOS parameters and an "importance" parameter. This modification addresses requirements R4.2.4 and R4.2.5.
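The sketch below is one way such an interface could be shaped; the names (GlobalPriorityService, PriorityParams) and fields are our assumptions, chosen only to show that deadline, period, importance, and QOS can all feed a single priority-assignment interface while the policy behind it stays replaceable.

    #include <algorithm>
    #include <cstdint>

    // Hypothetical inputs to priority assignment: any subset may be supplied,
    // so EDF (deadline-based), Rate Monotonic (period-based), or
    // importance-based policies can all live behind the same interface.
    struct PriorityParams {
        std::uint64_t deadline_usec = 0;   // 0 = no deadline supplied
        std::uint64_t period_usec   = 0;   // 0 = aperiodic
        std::uint32_t importance    = 0;   // application criticality
        std::uint32_t qos_level     = 0;   // requested quality of service
    };

    // The standard would specify this interface, not the policy behind it.
    class GlobalPriorityService {
    public:
        virtual ~GlobalPriorityService() = default;
        virtual int assign_priority(const PriorityParams& p) = 0;
    };

    // One possible policy: shorter periods get higher priority (a Rate
    // Monotonic flavour), with importance used as a tie-breaker.
    class RateMonotonicPriority : public GlobalPriorityService {
    public:
        int assign_priority(const PriorityParams& p) override {
            if (p.period_usec == 0)
                return static_cast<int>(p.importance);
            std::uint64_t period = std::max<std::uint64_t>(p.period_usec, 1);
            return static_cast<int>(1000000 / period)
                 + static_cast<int>(p.importance);
        }
    };

    int main() {
        RateMonotonicPriority svc;
        PriorityParams p;
        p.period_usec = 10000;   // 10 ms period
        p.importance  = 2;
        return svc.assign_priority(p) > 0 ? 0 : 1;
    }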

 

M6.4 Real-Time Event Service. The current CORBA 2.0 Event Service shall be modified to accommodate real-time events. Real-time events deliver with them the absolute time at which the event occurred. This time must be suitable for use in specifying timing constraints.
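As a sketch only (the type and field names are ours, not proposed standard names), a real-time event could carry the absolute time of occurrence alongside its payload, so that consumers can derive deadlines and other timing constraints from the event itself:

    #include <chrono>
    #include <string>

    // Hypothetical real-time event: the absolute occurrence time travels
    // with the event so consumers can state constraints relative to it.
    struct RTEvent {
        std::chrono::system_clock::time_point occurred_at;  // absolute time
        std::string topic;                                   // event name
        std::string payload;                                 // application data
    };

    // Example consumer check: react only if still within 50 ms of occurrence.
    bool still_within_deadline(const RTEvent& e) {
        using namespace std::chrono;
        return system_clock::now() - e.occurred_at < milliseconds(50);
    }

    int main() {
        RTEvent e{std::chrono::system_clock::now(), "track-update", "..."};
        return still_within_deadline(e) ? 0 : 1;
    }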

 

M6.5 Global Time Service. The RT CORBA standard shall specify modifications to the eventual CORBA Time Service to make that general service suitable for real-time.

 

[I have not had time to flesh out the following points completely. They are listed here and need to be fully described, like M6.1, M6.2, and M6.3 above.]

 

M6.6 Priority Inheritance.

- Specify how priority inheritance is used in the ORB and servers. See Requirement R4.2.7

 

M6.7 Real-Time Concurrency Control Service.

- The current CORBA 2.0 Concurrency Control Service is inadequate for RT. Specify how it should be changed; at a minimum, include priority inheritance (see the sketch below).
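The following minimal, single-threaded simulation (names and structure are ours) shows the behavior priority inheritance is meant to produce: when a higher-priority requester blocks on a lock, the current holder temporarily inherits the requester's priority so that it cannot be preempted by medium-priority work before releasing the lock.

    #include <algorithm>
    #include <cassert>

    struct Task {
        int base_priority;
        int effective_priority;
    };

    class PriorityInheritanceLock {
        Task* holder_ = nullptr;
    public:
        bool try_acquire(Task& t) {
            if (holder_ == nullptr) { holder_ = &t; return true; }
            // Requester must wait; propagate its priority to the holder.
            holder_->effective_priority =
                std::max(holder_->effective_priority, t.effective_priority);
            return false;
        }
        void release(Task& t) {
            assert(holder_ == &t);
            t.effective_priority = t.base_priority;   // drop the inherited boost
            holder_ = nullptr;
        }
    };

    int main() {
        Task low{1, 1}, high{10, 10};
        PriorityInheritanceLock lock;
        lock.try_acquire(low);    // low-priority task holds the lock
        lock.try_acquire(high);   // high-priority task blocks on it...
        bool inherited = (low.effective_priority == 10);
        lock.release(low);        // ...so the holder runs boosted until release
        return inherited ? 0 : 1;
    }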

 

M6.8 Real-Time Guarantee Service.

- RT CORBA should specify the interface to a service that provides a priori, real-time, QOS guarantees. See Requirement R4.2.13

 

M6.9 Real-Time Exceptions.

- See Requirement R4.2.9

 

M6.10 Time Type.

- Specify the time type to be added as a standard type to CORBA.

 

M6.11 Real-Time Environment Parameter.

- A parameter, passed with all method invocations, that specifies real-time and QOS constraints. The whitepaper should list what this environment should contain. See Requirement R4.2.2

 

M6.12 ORB Controlled Performance Polymorphism.

- Specify how the ORB should support Performance Polymorphism and QOS, perhaps by ORB selection of an appropriate polymorphic method from the server based on available time and QOS. See Requirement R4.2.8.
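A toy C++ sketch of the idea follows; the operation, the cost estimate, and the selection rule are all our assumptions. The dispatcher (standing in for the ORB) picks the precise implementation when the remaining time allows it, and falls back to a faster, approximate implementation otherwise.

    #include <chrono>

    struct Result { double value; bool approximate; };

    Result precise_solve()     { return {3.14159265, false}; }  // slower, exact
    Result approximate_solve() { return {3.14,       true};  }  // faster, rough

    // Stand-in for ORB-side selection among polymorphic implementations.
    Result dispatch(std::chrono::microseconds time_remaining) {
        const auto precise_cost = std::chrono::microseconds(5000);  // assumed cost
        return (time_remaining >= precise_cost) ? precise_solve()
                                                : approximate_solve();
    }

    int main() {
        Result r = dispatch(std::chrono::microseconds(1000));  // tight deadline
        return r.approximate ? 0 : 1;   // expects the approximate variant
    }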

 

M6.13 ORB API For Protocols.

- See Requirement R4.2.10

 

 

M6.14 Standard Multi-threading.

- A standard multi-threading interface. See Requirement R4.2.11.
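As a rough illustration of what a standard multi-threading interface buys a server, the POSIX-threads sketch below handles each incoming request on its own thread. In a real-time profile the thread would additionally be created with a POSIX.1B scheduling policy and a priority derived from the request's global priority; that step is only noted in a comment here, since the request structure and priorities shown are hypothetical.

    #include <pthread.h>
    #include <cstdio>

    struct Request { int id; int priority; };

    void* handle_request(void* arg) {
        Request* r = static_cast<Request*>(arg);
        // A real-time ORB would have set this thread's scheduling priority
        // from r->priority (e.g. via pthread_attr_setschedparam) at creation.
        std::printf("handling request %d at priority %d\n", r->id, r->priority);
        return nullptr;
    }

    int main() {
        Request reqs[2] = { {1, 10}, {2, 5} };
        pthread_t threads[2];
        for (int i = 0; i < 2; ++i)
            pthread_create(&threads[i], nullptr, handle_request, &reqs[i]);
        for (int i = 0; i < 2; ++i)
            pthread_join(threads[i], nullptr);
        return 0;
    }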

 

 

M6.15 Real-Time Binding

- Load balancing under RT criteria.

 

SUMMARY

 

 

CONCLUSIONS

 

CORBA Introduction Strategies

Mandated or new components. By using the CORBA IDL compiler to generate the public API component interfaces, language neutrality and public naming conventions are strictly enforced.

As a wrapper for maintenance upgrades. CORBA object technology can be introduced as a wrapper for the DII-COE segment element. The benefit provided by the CORBA wrapper is to define the distributed interface for cross-platform use of the segments.

Benefits

Trusted Software Components. The idea behind objects has been to construct trusted software building blocks. These blocks can be independently developed and tested and then placed into play as black boxes when needed. CORBA, as a distributed object manager, conforms to this goal.

Software runs where it fits best. CORBA extends the DII-COE concept of segments into objects, which may be instantiated on whatever machine in the network best hosts them. A client can refer to the software without it being installed on the client's machine. The COE uses this kind of implementation for the DBMS, whereas CORBA makes it available to all registered components.

Language Neutrality. The IDL compiler translates a language-neutral definition of the interface into language-specific elements representing the requested definition, while allowing a language-specific implementation.

Component Re-Use. CORBA supports the DII-COE goal of component reuse and adds the object-oriented techniques of inheritance and polymorphism as enhancements to component reuse.

Better System Serviceability. In conformance with DII-COE goals, CORBA-compliant objects are easily distributed and registered with the ORB Naming Service, which identifies their location for common segment usage of the distributed objects.

Cost Savings. Smaller components reduce maintenance and integration costs.

 

 

RECOMMENDATIONS

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

APPENDIX A

Glossary

 

 

 

 

Abort. The database transaction operation that a program uses to indicate that the transaction it is executing should be terminated abnormally and its effects should be obliterated.

 

API - see Application Program Interface.

 

Application program interface. A definition of the syntax and semantics that allows an application written in a host programming language to access or control an Object Data Management (ODM) system.

 

ATM - Asynchronous Transfer Mode. A network protocol that allows for packet switching along virtual circuits on the network. A virtual circuit is an a priori fixed path that packets follow.

 

ATM switch. A device that performs packet switching on an ATM network.

 

Atomic operation. An operation that terminates consistently, meaning that it either performs all of its work or performs none of its work; an indivisible unit of work.

Attribute. Object information manipulated by get and set operations, defined as a visible part of the state of an object. Also called property or instance variable.

Autonomous database systems. Database systems "under separate and independent control." Such systems may exhibit several types of autonomy, including design autonomy (the ability of a database system to choose its own design with respect to any matter, such as data being managed, representation and naming of data, etc.), communication autonomy (the ability of a DBMS to decide whether to communicate with other DBMSs), execution autonomy (the ability of a DBMS to execute local operations without interference from external operations), and association autonomy (the ability of a DBS to decide whether and how much to share its functionality and data).

 

Base class. A class from which one or more other classes are inherited. Also called a superclass.

 

Behavior. Perceived realization of actions.

Binding. The selection of a method to be executed in response to an operation request. Binding occurs either at compile time (early binding), or at run time (late binding).

Binary large objects. Long, variable-length sequences of bits or bytes used to represent non-conventional data, such as graphics, image, audio, and video objects.

 

BLOB. see Binary Large Object.

 

Blocking. The act of a higher-priority task waiting for a lower-priority task due to concurrency control.

 

Checkin/checkout. A concurrency control mechanism in which objects are copied from one database or reserved for use in another database (checked out), manipulated, then copied back to the database or released (checked in).

Class. A rule for building occurrences and the collection of these occurrences. Corresponds to the collection of rows in an RDBMS table.

 

Class library. A grouping of classes based on some relationship to or dependence on one another. Together, the classes usually implement some family of abstractions.

 

Commercial-off-the-shelf. Components that can be obtained from multiple commercial vendors. Non-proprietary.

Commit. The transaction operation that a program uses to indicate that the transaction it is executing should terminate normally and its effects should be made permanent.

 

Compatible. In the context of concurrency control, this means that two operations may execute concurrently or that two locks may be held concurrently.

 

Compatibility function. A characteristic of realtime object-based concurrency control where the object designer specifies, as a function, the conditions under which methods of the object may execute concurrently.

 

Composite objects. Objects composed of other objects.

Conceptual schema. A description of the conceptual or logical data structures and the relationships among those structures. The description is given in terms of a data model, such as a relational, network, object or hierarchical model.

 

Concurrency control correctness criteria. The concurrency control correctness criteria establish the allowable interleavings of concurrent execution. Serializability is an example of a typical concurrency control correctness criterion.

Concurrency control. Mechanisms that control simultaneous sharing of objects among processes.

Concurrency control protocol. A set of rules or conventions for controlling simultaneous access to objects.

 

CORBA. Common Object Request Broker Architecture. This is the object model and interface specification for the Object Management Group's (OMG) standard distributed computing environment. See also IDL, ORB, OMG.

 

CORBA client. An application program, written in a popular programming language, that transparently uses CORBA services and services provided by servers in a CORBA distributed system.

 

CORBA server. A computer program that accepts requests from CORBA clients over a CORBA ORB, performs some requested computation, and returns results to the CORBA client over the ORB. Servers may be legacy systems (see "legacy system") in a CORBA wrapper (see "wrapper").

 

CORBA Service. A service provided by CORBA vendors as part of the CORBA runtime environment (as opposed to application-provided servers). The CORBA 2.0 standard calls for the following CORBA services: Naming, Events, LifeCycle, Externalization, Transaction, Concurrency Control, Persistence, and Relationships.

 

CORBA wrapper. see "Wrapper".

 

Create. An operation that establishes the existence of a new object.

 

COTS - see Commercial-Off-The-Shelf.

 

Criticality. A measure of the relative importance of an execution or data item to the system operation.

 

Database Administrator (DBA). The person or persons responsible for the overall control of the database system at a technical level. DBA functions include: defining the conceptual schema, defining the internal schema, serving as a liaison to users, defining security and integrity checks, defining backup and recovery procedures, and monitoring performance and responding to changing requirements.

Data definition language (DDL). Language used to define or declare database objects (records, tables, columns, etc.). Includes statements to define database schema, which are stored in the database as metadata. Also includes statements to define auxiliary storage structures such as indices.

 

Data manipulation language (DML). Language used to manipulate or process database objects. Includes statements to retrieve, modify, insert, and delete data from the database.

Deadline. A deadline is an absolute (wall-clock) time constraining the end of a time interval.

 

Dependency. The reliance of one object upon another object for some behavior.

Derived class. A class that inherits from one or more other classes. Also called a subclass. See specialization, base class.

Destroy. The operation of terminating the existence of an object.

 

Directive. A realtime SQL language statement that supplies the runtime with information necessary for making execution predictable (e.g., bounds on table sizes, pinning a table in main memory).

 

Distributed database. (1) [Ozsu and Valduriez 91] "A distributed database is a collection of multiple, logically interrelated databases distributed over a network." (2) [Ceri and Pelegatti 84] "A distributed database is a collection of data which are distributed over different computers of a computer network. Each site of the network has autonomous processing capability and can perform local applications. Each site also participates in the execution of at least one global application, which requires accessing data at several sites using a communication subsystem."

Distribution. (1) The action or process of placing or positioning an object in two or more places. (2) The action or process of placing information or components of a computer-based system in separate address spaces or computer systems.

Encapsulation. Hiding representation and implementation details in order to enforce a clean separation between the external interface of an object and its internal implementation.

Extensibility. (1) The ability to add new classes and subclasses to an existing schema. (2) The ability to add new attributes, methods, superclasses, etc. to a class. (3) The ability for existing instances to acquire or lose types.

 

Event Service - A CORBA service through which clients can determine, from the CORBA runtime service, whether certain named events have occurred.

 

Externalization Service - A CORBA service that places CORBA objects into a common format for exchange.

External schema. Sometimes referred to as a view. Describes a subset of the database (defined by the conceptual schema) that may be accessed by a user or a class of users. The description may be given in terms of a data model different from the one used at the conceptual level. The description may include additional access control information and integrity constraints.

 

Equality. A comparison operation whose definition may be class-dependent or system dependent.

 

Firm realtime. Firm realtime means that failure to execute within timing constraints produces no useful results.

 

Generalization. The action or process of deriving from many objects a concept or principle that is applicable to all the objects. A base class (superclass) is a generalization of its derived classes (subclasses).

 

Global transaction. A transaction whose work may occur across heterogeneous resource managers, or data servers. (Also referred to as a distributed transaction, where distributed refers to distribution across data servers and not necessarily across computer systems.)

 

Hard realtime. Hard realtime means that failure to execute within timing constraints produces catastrophic results.

 

Heterogeneity. A quality of an Object Data Management (ODM) system which allows it to execute in different hardware and software environments, including different machines, operating systems, etc.

 

Heterogeneous. An aggregate is heterogeneous if the objects it contains are instances of different classes and types.

Heterogeneous database systems. Database systems that vary with respect to DBMS (i.e., vendor, version, etc.), data model, query language, and/or data definition (i.e., names, value types, data structures).

Hierarchical data model. A model in which records are related in a tree structure. Relationships between records are indicated by pointers.

Homogeneous. An aggregate is homogeneous if the objects it contains are instances of the same class or type.

Identity. A characteristic of an object which provides a means to uniquely denote or refer to the object independent of its state or behavior.

Identity compare. An operation which determines whether two references refer to the same object.

 

IDL - see Interface Definition Language.

Implementation independence. A characteristic of an object which allows its interface to be independent of its underlying implementation.

 

Importance. see "Criticality".

Inheritance. Deriving new definitions from existing ones.

 

Instance. An occurrence of a class or type.

Instance evolution. The process of making existing instances consistent with modified class definitions.

 

Integrity constraint. Typically, a predicate that states a condition that must hold for a system or object to be in a legal state or that defines legal state transitions.

 

Intelligent Wrapper. A proposed DHDA idea where CORBA wrappers (see "wrapper") include intelligence to route CORBA requests to servers that are managed by the intelligent wrapper.

Interactive query. Query language statement issued by an interactive user.

Interface. The operations in which an object can participate.

 

Interface Definition Language - A language-neutral notation for declaring the interfaces of servers. In CORBA, IDL has a C++-like syntax and specifies the methods and attributes of a server.

 

Internal schema (or Logical Structure). Internal logical divisions of a database, used to control space allocations, set space quotas for users, control availability of data, backup or recover data, allocate data across devices for performance reasons, etc.

 

Inter-ORB interoperability. A part of the CORBA 2.0 standard that allows ORBs from different vendors to inter-operate (e.g. clients on one ORB requesting service from a server on another ORB).

 

Key. Attribute, or set of attributes, that uniquely identifies an object within a class (in the relational model, a column or set of columns that uniquely identifies a row within a table).

 

Legacy System - An existing computer system or program. Typically used in the context of integrating the legacy system with other computer software.

 

LifeCycle. The period of time during which an object exists. A CORBA 2.0 Service which involves creation and destruction of CORBA objects.

 

Lock. A security or concurrency mechanism often used to control the access to and sharing of an object.

 

Logical consistency. Logical consistency requires that data meet integrity constraints (typical database consistency constraints).

 

Logical imprecision. Logical imprecision measures the degree of logical inconsistency of data.

Long transaction (i.e., long-lived transaction, long-duration transaction). A transaction representing a computation that may take up to hours, days, or even longer to complete. Such transactions are incompatible with conventional transaction concurrency control and recovery control policies and mechanisms.

 

Main memory database. A database where specified data is kept in main memory.

 

Messaging. The method by which one object requests another object to perform operations. There is no real relational equivalent.

 

Method. A predefined procedure that is stored in an object and that determines all interaction between the user and the database. The relational counterpart to methods is the stored procedure.

 

Method invocation. The calling of a method and then acceptance of its return values.

 

Metadata. See schema.

Naming Service - a CORBA service in which the ORB provides to a client a reference to a (remote) CORBA object.

Nested transaction. A tree of transactions, with the root of the tree being the top-level transaction and the other transactions being subtransactions. (The term nested transaction is also used to refer to a subtransaction of some nested transaction.) Nested transactions can be used as a mechanism for facilitating recovery control in long transactions.

Non realtime. Non realtime means that there are no timing constraints.

 

Object. An instance of a class or type.

Object Identifier. A unique identifier assigned to the object.

 

Object language (OL). A language used to specify and manipulate class definitions and object instances.

Object model. A model which supports encapsulation, object identity, types, classes, behavior, inheritance, and instances.

 

Object Management Group - A consortium of over 500 software vendors, users, and researchers that meet regularly to agree upon and publish the CORBA standard.

 

Object-oriented database management system. Also referred to as object data management (ODM) system. A DBMS characterized by the following advanced functionality: "object identity; object references based on object identifiers; BLOB-valued attributes (where a BLOB is a binary large object); collection-valued attributes including sets, lists, and arrays of literal or reference values; composite objects and operations; type hierarchies with inheritance; procedures used for encapsulation and active data; more powerful query languages integrated with programming languages; versions; and new transaction mechanisms."

 

Object Request Broker (ORB) - The CORBA entity that models the distributed runtime environment. The ORB provides standardized communication and services to CORBA clients and servers. Essentially it enables transparent client/server interaction as well as providing system-level services to clients or servers.

 

OMG. see Object Management Group.

 

OO. Object-Oriented.

 

Operation. Something which can be applied to one or more objects in a request.

 

Open system. A system where as many components as possible are Commercial-off-the-shelf and as many components as possible have interfaces that adhere to a widely accepted standard.

 

Open Object-Oriented Database. An OO database system from Texas Instruments, funded by DARPA, which is designed with modular components, and has freely available source code.

 

ORB. see Object Request Broker.

Period. A period establishes regular time intervals of a constant relative time duration where the start of the i-th interval is the end of the (i-1)st interval. A periodic constraint requires that execution appear once and only once within every generated period.

Persistent object. An object which exists longer than the process that created it. Persistence is a CORBA 2.0 service in which a client can request that a certain object maintain its state permanently.

 

Polymorphism. A form of binding in which the same operation can bind to different implementations when sent to different objects.

 

POSIX. IEEE standard for operating system interfaces. POSIX.1 is essentially Unix. POSIX.1B (formerly POSIX.4) specifies realtime features for the operating system.

 

Priority. An integer number assigned to a task or data indicating its relative importance for scheduling. Note that priority differs from Importance/Criticality in that Importance is an application-specified characteristic of the task, where priority is assigned by the system to use in scheduling. Priority may be derived from Importance, but priority may depend on other factors, such as deadlines or periods.

 

Priority ceiling (of a lock L). The highest priority of a task that requests a lock which conflicts with lock L. Used in realtime concurrency control.

 

Priority inheritance. A lower-priority task assuming the priority of a higher-priority task which the lower-priority task blocks.

 

Priority inversion. A lower-priority task blocking a higher priority task.

 

Priority scheduling. A scheduling policy where the highest priority ready task is always executed. Typically this involves preempting a lower priority task, if preemption is allowed.

Property. See Attribute.

Protocol. See Interface.

 

Query language. Language generally regarded as comprising a DML, DDL, and DCL. SQL is the ANSI and international standard query language.

Query. A retrieval from the database. In the broad sense, the term refers to any DML statement or even any query language statement.

 

Realtime. Execution and data must meet timing constraints to be correct. Correctness is further defined under "hard realtime", "firm realtime", and "soft realtime".

 

Realtime CORBA. A CORBA environment that supports time-constrained method invocations.

 

Realtime Database Management System. A realtime database management system is a DBMS that manages time-constrained data and time-constrained transactions.

 

Realtime SQL. A proposed extension to the ANSI/ISO standard SQL database query language. Realtime SQL extends SQL with time-constrained queries, time-constrained data, and directives.

Recipient. A distinguished argument of an operation which is the receiver of the operation.

Recovery. The process of reproducing a consistent state of a system after a failure.

 

Reference (to an object). A data structure that is suitable to specify a particular object so that the object may be accessed.

 

Relational data model. A table-based model. All data is held in tables, with each table having a fixed number of columns and a variable number of rows. A row corresponds to a record, and columns correspond to fields. Relationships between rows in different tables are indicated by shared column values.

 

Relationship. An association among two or more objects. If the relationship is between exactly two objects, it is termed binary. If the relationship is among three or more objects, it is termed n-ary. Relationships may define direction and may be unidirectional or bidirectional. A cardinality (allowed number of objects) may be specified for the relationship.

Remote database access (RDA). Access to "databases stored on a machine different from the one where the user or application is executing. Remote access is almost universally implemented in the same way. A small part of the DBMS resides with the application program on the client machine. This portion is responsible for encoding queries, sending them over a network to the server machine where the remainder of the DBMS resides, and receiving any records returned over the network to decode and pass to the client application program".

 

Replication. The process of making copies of an object in more than one location or space, each of which are kept consistent (i.e., all copies will respond identically to identical requests).

Request. The application of an operation to one or more objects.

 

Scheduling. Scheduling refers to controlling access to all system resources and includes concurrency control of data objects, CPU time-sharing, and allocation of memory and devices.

 

Schedulability Analysis. An analytical procedure to determine if a set of realtime tasks can meet their timing constraints.

 

Schema. The collection of definitions of types, classes, and operations.

Schema evolution. The process of altering the schema.

 

Semantic concurrency control. Concurrency control where conflict among operations is defined by the system designer using application information.

 

Shared memory. A feature of POSIX.1B-compliant operating systems that allows processes to share memory.

Side effect. A state change (or update).

Signature. The specification of the number and types of an operation's arguments and results.

Soft realtime. Soft realtime means that failure to execute within timing constraints produces less desirable results than would be produced by meeting timing constraints.

 

Specialization. The action or process of adding a concept, principle, or operation to a class or type that is more specific or particular than other similar classes or types.

State. The information that must be remembered when a request alters the future behavior of other requests.

 

Start event. A start event is an occurrence in the system (including, but not limited to, active database triggers and reaching an absolute time) constraining the start of a time interval.

Subclass. A class that inherits from one or more other classes. Also called a derived class.

Superclass. A class inherited by one or more other classes. Also called a base class.

Synchronization protocol. The protocol that processes must follow to access an object concurrently.

Temporal consistency. Temporal consistency requires that data meet timing constraints.

 

Temporal imprecision. Temporal imprecision measures the degree of temporal inconsistency of data.

 

Timing constraints. Timing constraints refer to all or some of the following: start event, deadline, and periodic constraints.

 

Transaction. A series of query language statements representing a logical unit of work with the following "ACID" properties:

 

Atomicity. A transaction is atomic; it is performed in its entirety or not at all.

Consistency. A transaction is a correct transformation of the database state. The transaction, as a whole, does not cause integrity constraints to be violated.

 

Isolation. The intermediate results of a transaction are isolated from other concurrent transactions. That is, transactions are synchronized so that they are serializable: the effects of concurrently executing transactions are the same as if they were executed serially, in some order.

 

Durability. Once a transaction successfully completes (commits), its changes to the database state survive failures.

 

Transient object. An object that lives no longer than the execution of the process that created it.

 

Transparency. The concept of hiding some or all details of an operation or behavior from the requester, so that the operation can be performed without the requester supplying those details. Several examples are persistence transparency, syntactic transparency, fragmentation transparency, location transparency, replication transparency, and execution transparency. While details may be hidden, there is often a need to control the hidden behavior (turn it off or on, specify some policy such as eager or lazy, etc.).

Trigger. A mechanism through which an action to be performed upon the occurrence of a given event (e.g., access to a given object) can be specified. The action typically involves database updates.

 

Type. The specification of a protocol (interface) and a collection of objects to which the protocol applies.

 

Unplanned (or ad hoc) query. A query language statement (usually a DML statement) for which the need was not foreseen.

 

Update. A DML statement that modifies a database object.

Version. A mechanism that can be used for concurrency control, recovery control, and configuration management. New versions of objects can be created explicitly or implicitly (for example, when an object is modified). Version histories of objects can be maintained and examined. Version histories can include branches (two users working on a given object in parallel for different reasons) and merges (the work of the two users is combined into a merged version). Obsolete versions can be destroyed or archived.

Wrapper. Software that allows a legacy or native COTS sub-system to participate in a distributed environment. A CORBA wrapper allows the sub-system to participate in a CORBA environment by making the sub-system's interface that of a CORBA server.

 

CORBA Terms

API -- Application Program Interface - The way a programmer requests services from other software components. The rule of thumb is that the more APIs defined and involved, the more costly the system is to integrate and maintain.

BOA -- Basic Object Adapter - The interface between the object implementation and the ORB; it binds the object providing the service to the backbone bus that connects it to the service user.

CORBA -- Common Object Request Broker Architecture

ORB -- Object Request Broker - The logical information bus connecting the requester and server portions of a transaction. Under the CORBA 2.0 specification this may be implemented by a standard set of APIs wrapping an OSF DCE implementation.

COSE -- Common Open Software Environment -- X/Open specification. Almost identical to the DII-COE concept.

COSS -- Common Object Service Specifications -- OMG specification.

DCE -- Distributed Computing Environment - The OSF API to support distributed computing in a structured environment. DCE has an IDL different from the CORBA IDL in that it supports remote procedure calls from the "C" language. It is therefore neither language neutral nor extensible using object technology.

IDL -- Interface Definition Language - A meta-language designed to define APIs. CORBA provides a translator of IDL into implementation-language-specific mappings while retaining the language neutrality of the API definitions.

OMG -- Object Management Group - The industry consortium defining and controlling the CORBA specification. The OMG is the world's largest software consortium, with over 600 member organizations worldwide.

OSF -- Open Software Foundation - The industry consortium concerned with providing cross platform compatibility.

UNO -- Universal Networked Objects - OMG Document 95.3.1 -- This specification provides for the interoperability between different ORB vendor implementations allowing a heterogeneous environment of CORBA 2.0 compliant software implementations.

 

 

 

 

 

 

 

 

 

 

 

 

APPENDIX B

OMG Membership

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

APPENDIX C

Application Examples