
TRANSACTION PROCESSING CONCEPTS AND TECHNIQUES PDF

Thursday, September 12, 2019




In Figure 1, there are standards for other aspects of collaboration as well. Section 2 gives an overview of business-aware transaction models. It describes which advanced transaction models were developed before the emergence of service-oriented computing (SOC), followed by a description of research projects about business transactions. Next, Section 2 discusses the challenges and features of business transactions, followed by a conceptual proposal for how business-aware transaction models could be managed.

Section 3 discusses industry initiatives for managing electronic business transactions. These initiatives are candidates for becoming standards and Section 3 concludes with a comparison of the initiatives.

Section 4 covers transactions in grid computing and presents different research groups that investigate grid transactions. Section 5 presents conceptual and technical transaction frameworks that integrate several transaction models.

Finally, Section 6 concludes this document and maps out future research requirements for the domain of business transactions.

Business-Aware Transaction Models

With the emergence of computer systems, the manual registration of business transactions can be automated. Information systems in organizations are used for processing, executing and coordinating technical-level transactions. However, an e-business transaction (eBT) concept must go further than transactions in the database domain.

An eBT needs to reflect the operational business semantics in terms of business needs and the objectives between collaborating parties. Hence, eBTs are long-lived, involve collaboration at multiple levels, are characterized by unconventional behavioural features, and include multiple parties that exchange services for compensation.

The successful completion of an eBT results in consistent state changes that reflect the objectives of multi-party business collaboration.

Transactions before service-oriented computing

To manage data in a traditionally sound way, transactions must fulfil the following requirements.

You might also like: PDC BY ANAND KUMAR PDF

Atomicity states that a transaction executes completely or not at all; consistency means a transaction preserves the internal consistency of the database; isolation means a transaction executes as if it were running alone, with no other transactions; and durability demands that the results of a transaction are not lost in a failure.
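To make these properties concrete, the following minimal sketch (in Python, using the standard sqlite3 module; the accounts table and the transfer rule are illustrative, not taken from the text) shows a transfer that either commits both writes or rolls both back:

```python
import sqlite3

# Hypothetical accounts table used to illustrate an atomic transfer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Debit src and credit dst atomically: either both updates become
    durable (commit) or neither does (rollback)."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            (balance,) = conn.execute(
                "SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")  # triggers rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except ValueError:
        pass  # the database is left in its consistent pre-transaction state

transfer(conn, "alice", "bob", 70)   # commits
transfer(conn, "alice", "bob", 70)   # aborts: alice would go negative
print(conn.execute("SELECT * FROM accounts").fetchall())
# [('alice', 30), ('bob', 120)]
```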

These generally known ACID properties are instrumental for exception handling in transaction management, which is shielded from the applications running on top of databases. From a technical standpoint, current web service composition approaches are confronted with the transactional challenge of relaxed atomicity, where intermediate results may be kept without rollback despite the failure to complete the overall execution of a composite service.

Advanced transaction models are extensions to flat transactions that relax one or more ACID constraints to meet specific requirements. Two strategies have been adopted to achieve different structures inside a transaction. In the first, a large transaction is divided into smaller components, which can in turn be decomposed further. In the second, the long processing time is split: the transaction is divided into a sequential series of smaller components, each operating in a shorter time.

Flat transactions still dominate the database world because of their simple structures and easily implemented ACID properties.

Save points and checkpoints

Advanced transaction models that existed already before the age of SOC divide a transaction into sub-transactions according to the semantics of the application. These sub-transactions can also be further divided.

An advanced transaction is capable of performing complex and longer-lasting tasks, for example restarting from the middle of the transaction instead of the very beginning when a failure occurs. A supporting concept is the mechanism of save points [1], which enables a transaction to roll back to an intermediate state for recovery.

Distributed and Nested Transactions

The save point mechanism results in advanced transaction models, i.e. distributed and nested transactions.

These models are application specific and each of them addresses the need of a given situation. For example, a distributed transaction is needed if an organization must integrate several database systems that reside on different servers. A nested transaction is suitable for complex-structured applications and a chained transaction is appropriate for a time-consuming application with long-lasting transaction processes.

A chained transaction is a variation of save points, while the nested transaction is a generalization of save points [26]. Distributed transactions consist of sub-transactions that may access multiple local database systems.

The model defines two types of transactions: local transactions and global ones. Local transactions are executed under the control of the local database management system (DBMS), while the multi-database system (MDBS) is in charge of global transactions. Hence, local and global integrity constraints are aligned. Global atomicity and isolation are also maintained: the whole transaction is aborted if any sub-transaction fails.

The two-phase commit (2PC) protocol enables cooperation between multiple applications in distributed systems. During the first phase, the coordinator asks every participant to prepare and to vote on the outcome. During the second phase the transaction is committed if all participants agree, and the original state is updated with the stored changes; if one participant refuses, the entire transaction is rolled back. Both phases are driven by the transaction coordinator.
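The protocol can be sketched as follows (an in-process simulation for illustration only; a real coordinator would additionally log every decision to stable storage to survive crashes):

```python
# A minimal sketch of two-phase commit as described above.

class Participant:
    def __init__(self, name, will_vote_yes=True):
        self.name = name
        self.will_vote_yes = will_vote_yes
        self.state = "active"

    def prepare(self):           # phase 1: vote request
        self.state = "prepared" if self.will_vote_yes else "aborted"
        return self.will_vote_yes

    def commit(self):            # phase 2: make staged changes final
        self.state = "committed"

    def rollback(self):          # phase 2: discard staged changes
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: the coordinator asks every participant to prepare and vote.
    all_yes = all(p.prepare() for p in participants)
    # Phase 2: commit only if all voted yes; otherwise roll everyone back.
    for p in participants:
        p.commit() if all_yes else p.rollback()
    return "committed" if all_yes else "rolled back"

ps = [Participant("orders"), Participant("inventory", will_vote_yes=False)]
print(two_phase_commit(ps))  # -> rolled back (one participant refused)
```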

While distributed transactions use a bottom-up approach to divide transactions into sub-transactions from a geographical point of view, nested transactions adopt a top-down method to decompose a complex transaction into sub-transactions or child transactions according to their functionalities [44]. In nested transactions it is possible for parts of a transaction to fail without necessarily aborting the entire transaction.

Sub-transactions are composed in a hierarchical manner, and only the leaf-level sub-transactions perform database operations while the others function as coordinators. Each sub-transaction is atomic and can abort independently of its parent and siblings, leaving results as if it had never executed.

When it aborts, the parent may trigger another sub-transaction as an alternative. If the aborted sub-transaction makes the database inconsistent, the whole nested transaction still meets consistency requirements. Multilevel transactions, also called layered transactions [61], and their generalization, open nested transactions, are based on the idea of nested transactions [38]. Multilevel transactions are a variation of nested transactions in which the transaction tree has levels corresponding to the layers of the underlying system architecture.

These transactions employ a pre-commit concept that allows an early commit of a sub-transaction before the root transaction actually commits; this requires a compensating action that can semantically undo the committed sub-transaction if the root aborts. Multilevel transactions evolve into open nested transactions if the structure of the transaction tree is no longer restricted to layering.

Open nested transactions relax the ACID properties compared to nested transactions, which guarantee isolation at the global level; this means the intermediate results of committed sub-transactions in nested transactions are invisible to other concurrently executing ones. Open nested transactions relax the isolation property at the global level to achieve a higher degree of concurrency.

Chained Transactions and Sagas

The nested transaction and its extensions are only fit for specific environments like federated databases. However, they are not suitable for environments requiring long-lived transactions, which is why the idea of chained transactions was adopted: a long-running transaction is decomposed into small, sequentially executing sub-transactions [26].

The chained transaction is a variation of the save point mechanism. A sub-transaction in the chain roughly corresponds to a save point interval. However, the essential difference is that each sub-transaction itself is atomic, while each interval between two save points is only part of an atomic transaction. In the chain, a sub-transaction triggers the next upon commit, until the whole chained transaction commits. When encountering a failure, the previously committed sub-transactions have already durably changed the database, so that only the results of the currently executing sub-transaction are lost.

This way the rollback only returns the system to the beginning of the most recently executing sub-transaction. From the application perspective, the atomicity and isolation properties are no longer guaranteed for the whole chain. For example, in the middle of execution the already committed sub-transactions cannot be undone, which makes it problematic to abort the whole chain.

Another consequence is that other concurrent transactions can see the intermediate results generated during the execution of the chain. Sagas [25, 19] are based on the idea of chained transactions, but additionally include a compensation mechanism for rollback. Sagas divide a long-lasting transaction into sequentially executed sub-transactions, and each sub-transaction, except the last one, has a corresponding compensating sub-transaction.

All these sub-transactions are atomic with ACID properties. When any failure arises, the committed sub-transactions are undone by their compensating sub-transactions. Unlike non-atomic chained transactions, which cannot undo committed sub-transactions in the case of an abort, sagas can use compensating sub-transactions to return the whole transaction back to the very beginning. Sagas preserve application-dependent atomicity. Similar to chained transactions, a saga may be interleaved with other concurrent transactions, thus isolation is not guaranteed.
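The saga mechanism can be sketched as follows (the booking steps and their compensations are hypothetical):

```python
# A minimal saga sketch: sub-transactions run in sequence, and on a
# failure the already committed steps are undone by their compensating
# sub-transactions in reverse order.

def run_saga(steps):
    """steps: list of (action, compensation) pairs; an action either
    returns normally (commits) or raises an exception (aborts)."""
    compensations = []
    try:
        for action, compensation in steps:
            action()                          # commit this sub-transaction
            compensations.append(compensation)
    except Exception as failure:
        for undo in reversed(compensations):  # roll the saga back
            undo()
        return f"saga aborted: {failure}"
    return "saga committed"

log = []

def fail():
    raise RuntimeError("hotel full")

steps = [
    (lambda: log.append("reserve seat"), lambda: log.append("release seat")),
    (lambda: log.append("charge card"),  lambda: log.append("refund card")),
    (fail,                               lambda: None),
]
print(run_saga(steps))  # saga aborted: hotel full
print(log)  # ['reserve seat', 'charge card', 'refund card', 'release seat']
```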

Research projects and transactions

Workflow-oriented research projects resulted in specific transaction models for ensuring the reliability of automated business processes. A workflow process may involve database transactions that apply ACID properties.

Similar to the decomposition mechanism of advanced transaction models, a workflow process can be modelled by decomposition into sub-processes in a hierarchical or sequential way. Below, the latest relevant workflow-oriented research projects on transactions are presented. The WIDE transaction model combines the concept of safe points with Sagas [25], so that more flexibility is offered in compensation paths in case of exceptions. The bottom layer consists of local transactions with a nested structure that conform to the ACID properties [10].

The upper layer is based on Sagas that roll back the completed sub-transactions using the compensation mechanism, thus relaxing the requirement of atomicity [57]. The semantics of the upper layer is formalized using simple set and graph theory [28].

The local transaction layer is designed to model low-level, short-living business processes, whilst the global transaction layer models high-level, long-living business processes. The CrossFlow project proposes a contracted service outsourcing paradigm in which the X-transaction model offers support for inter-organizational workflows.

In contrast, the WIDE transactional model caters for intra-organizational workflows. The X-transaction model is a three-level, compensation-based transaction model to support cross-organizational workflow management. The three levels in this model are the outsourcing level, the contract level and the internal level, each with a different visibility to the consumer or the provider organization.

The X-transaction model views an entire workflow process as a transaction. Intra-organizational processes can be divided into smaller I-steps that adhere to ACID properties. Each I-step has a compensating step in case of failure.

Similar to this idea, a contract-level inter-organizational process is divided into X-steps, each of which corresponds to one or more I-steps.

With the components of I-steps, X-steps and compensating steps, the X-transaction model realizes a flexible intra- and inter-organizational rollback effect, supporting all scenarios with all combinations of rollback scopes and rollback modes.
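The step structure described above can be sketched as a data model (the classes and the rollback routine are illustrative assumptions, not the CrossFlow implementation):

```python
# Sketch of the X-transaction structure: an inter-organizational process
# is a sequence of X-steps, each mapped to one or more internal I-steps,
# and every I-step carries a compensating step that can undo it.

from dataclasses import dataclass, field

@dataclass
class IStep:
    name: str
    compensating_step: str   # executed to undo this I-step after commit

@dataclass
class XStep:
    name: str
    i_steps: list = field(default_factory=list)  # one or more I-steps

def rollback(x_steps, completed_upto):
    """Undo all I-steps of the X-steps completed so far, newest first."""
    for x in reversed(x_steps[:completed_upto]):
        for i in reversed(x.i_steps):
            print(f"compensate {i.name} via {i.compensating_step}")

order = [
    XStep("accept order", [IStep("record order", "cancel order")]),
    XStep("fulfil order", [IStep("pick goods", "restock goods"),
                           IStep("bill customer", "issue credit note")]),
]
rollback(order, completed_upto=2)  # undoes billing, picking, then the order
```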

Challenges of electronic business transactions

The complexity of transactions that span multiple organisations rises in loosely coupled distributed computer networks that are enabled by SOC.

Additionally, business processes that use the databases involved in a transaction must be integrated across organizations. The transactional parts of a business process that are supported by workflow systems are usually referred to as a business transaction. For example, if the ACID model is used, this means that the whole business process fails if one activity fails. Statically coupled workflow systems are not suitable for doing electronic business collaboration (eBC) in a highly dynamic environment on an ad hoc basis; this means a flexible, dynamic and loose coupling of systems must take place.

Instead, combining multiple web services in a business process offers a solution for eBC. However, the traditional ACID properties are too strong for business transactions when web services are employed, as web-service-based business transactions are part of long-running business processes. The reason is that data from resources at the back end of web services need to be locked in order to assure atomicity and isolation.

However, locking data for isolation in long-running transactions is unrealistic, as this might block resources for long periods of time that are consequently not available to others. For example, locking tables while selling a product blocks other potential customers, results in lower turnover and prevents other organisations from participating in the business process.

Currently, SOA does not cater for interactions between business processes of different organisations that involve business agreement descriptions together with reliability features. For example, when the strict ACID properties are used, a business process cannot continue in an unaffected way in case an eBT is cancelled.

The synchronization of business processes between organisations must be part of a wider business coordination protocol, based on web services, that defines the publicly agreed business interactions between business parties.

Additionally, well-founded possibilities for composing eBTs out of several transaction models are missing, for example combining an ACID transaction with an open nested transaction [61] to support long-running transactions over heterogeneous systems that are integrated in a loosely coupled fashion.

Features of electronic business transactions

An eBT is automated, complex, long running and may involve multiple internal and external parties. Additionally, an eBT requires commitments to the transaction that need to be negotiated by the participating organisations [41].

Further features of an eBT are support for the formation of contracts, shipping and logistics, tracking, varied payment instruments and exception handling. Firstly, eBTs extend the scope of traditional transaction processing, as they may encompass classical transactions which they combine with non-transactional processes. Secondly, they group both classical transactions and non-transactional processes together into a unit of work that reflects the semantics and behaviour of the underlying business task.

Thirdly, they are governed by unconventional types of atomicity. Payment atomicity is the basic level of atomicity that each electronic commerce protocol should satisfy. Contract atomicity is normally based on electronic commerce protocols which include the exchange of financial information services and the exchange of bills and invoices; thus contract-atomic protocols must also be payment-atomic. In the world of e-commerce, traditional database transactions are replaced with long-lived, multi-level collaborations.

The web services of an eBC embody business functions that are connected across organizational domains, which requires correlation and coordination mechanisms for keeping track of the ongoing inter-organizational process and the transactions involved in that process.

At the same time, the autonomy of the participating organisations must not be infringed. Hence, the overall transactional capabilities of an eBT depend on the transactional capabilities of the participating web services. Orchestrating loosely coupled web services into a business transaction, and eventually into one single, high-level, overall business process with guaranteed, coordinated and predictable outcomes for all participating organisations, requires a failure-resilient coordination protocol together with durable storage of process progress.

Additional features of such a framework need to include the creation of complex processes with different activities involving web service operations that are part of dynamic service compositions, manipulations and coordination of data flows, exception and error handling mechanisms and termination of processes.

Managing electronic business transactions

The integrated heterogeneous systems of an eBC need to be loosely coupled because of the different reliability requirements that exist within long-running eBTs. Different reliability requirements result from the properties of an eBT, such as the phase the transaction is in and the level at which the transaction is taking place. In [49] a phased model is introduced that distinguishes between pre-transaction, main-transaction and post-transaction phases in a collaborative business process.

In [27] the need for a three-level process framework is identified, as companies are not willing to directly connect their legacy systems.

Presenting the backend system applications as services forms the conceptual level. Along a time line, the external-level phases of an eBT can be visualized; these need to be coordinated with the eBT phases on the conceptual level within an organization. Finally, the conceptual level coordinates the legacy systems of the internal level, which give technical feedback to the higher level about the success or failure of a transaction.

Likewise, the conceptual level releases coordination information to the external level for aligning an eBT with the domain of the collaborating counterpart. The theory of spheres of control [4] originates from the domain of traditional database transactions. So-called workflow spheres [39] expand the transaction theory into the dynamic world of complex business processes. Those concepts are applied in [31] for analyzing atomicity criteria dependencies and atomicity spheres.

This work, however, does not relate these workflow concepts to highly dynamic inter-organizational processes. In the work of [55], substantial emphasis is put on the characteristic atomicity properties of e-business. These unconventional atomicities for spheres in electronic business transactions (eBTs) are explored and related to each other [49] along the categories of system-level atomicity, business-interaction atomicity, and operational-level atomicity.

These atomicities need to be part of a transaction model that pays attention to the business realities that form the context of eBC. The need for comprehensive and flexible transactional support is addressed in the XTC project (eXecution of Transactional Contracted electronic services) [59].

By means of a business transaction framework (BTF), the XTC project lays a transactional foundation for processes in a contract-driven and service-oriented environment. In the remainder of this paper, more details about the BTF are presented.

Industry initiatives for electronic business transactions

A technique to guarantee the consistency and reliability of web-service applications is needed.

However, no transaction mechanism is widely accepted as a standard. Currently, there are three possible candidates, which are presented in the following subsections, followed by a comparison of the approaches. The Business Transaction Protocol (BTP) is instrumental for representing and seamlessly managing complex, multi-step business-to-business (B2B) transactions over the Internet, ensuring consistent outcomes for parties whose applications are disparate in time, location and administration and that participate in long-running business transactions [40].

In a BTP compliant web service environment, a transaction manager confirms or cancels the backend system a web service encapsulates. Hence, a direct communication exists between the transaction manager and the backend system, which contradicts the web-service philosophy.


Opening up backend systems to play the role of participant within the transaction for external parties introduces security issues and bypasses the purpose of web services. Every phase of a transaction within BTP [23] stands on its own and may be implemented in any way by a BTP compliant web service or application. In BTP, during the first phase of a transaction, the participants perform provisional or tentative state changes that are called the provisional effect.

In the second phase the participants complete the transactions, either through a confirmation that is called the final effect or through a cancellation that is called the counter effect. Additionally, using business logic in BTP, the application also determines which participants to commit (called a consensus group) and which to cancel.

If a consensus group is specified as an atom, it is guaranteed that the transaction outcome of this consensus group is atomic, meaning that either all participants confirm or all participants cancel. If a consensus group is specified as a cohesion, the atomicity property of the group is relaxed compared to an atom.

Using business logic, the application itself determines which participants to confirm or cancel. Cohesions are used to model long-running transactions; the participants whose results should be committed form a so-called confirm set.
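The difference between the two completion styles can be sketched as follows (an illustration of the outcome logic only, not the BTP wire protocol; the participants and the business rule are hypothetical):

```python
# An atom confirms all participants or none, while a cohesion lets
# business logic pick which participants to confirm; the rest are cancelled.

def complete_atom(participants, all_prepared):
    decision = "confirm" if all_prepared else "cancel"
    return {p: decision for p in participants}

def complete_cohesion(participants, choose_confirm_set):
    confirm_set = choose_confirm_set(participants)  # business logic decides
    # The chosen confirm set itself must then complete atomically.
    return {p: ("confirm" if p in confirm_set else "cancel")
            for p in participants}

quotes = ["airline-A", "airline-B", "hotel"]
# Hypothetical business rule: keep the hotel and the cheaper airline.
print(complete_cohesion(quotes, lambda ps: {"airline-A", "hotel"}))
# {'airline-A': 'confirm', 'airline-B': 'cancel', 'hotel': 'confirm'}
```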

The confirm set itself is in turn an atom, as all members of this set should complete successfully. The WS-Coordination (WS-C), WS-AtomicTransaction (WS-AT) and WS-BusinessActivity (WS-BA) specifications are likewise aimed at the reliable and consistent execution of web-based business transactions using different interconnected web services.

This way WS-C provides a generic coordination infrastructure for web services, making it possible to plug in specific coordination protocols [24, 42]. The coordination framework specifies three services necessary to support coordination protocols.


The Coordination Service ensures that the registered web services are driven to completion using the selected protocol. The protocol defines the behaviour and the operations that are required for the completion of an activity.

Traditionally, these systems are heterogeneous, and coupling them together within one organization is the first step towards interoperability. Any participating system in the transaction can abort the entire transaction. This requires high mutual trust between the participants in such a transaction, making it hard to use this protocol for inter-business transactions.

Details of these protocols can be found in the WS-AT specification [14]. Atomic transactions handle system generated exceptions transparently from the application that drives the transaction. The higher-level application that drives the business activity and spans multiple atomic transactions does not deal with those system-generated exceptions.

Instead, the higher-level application can focus on the handling of business exceptions. Hence, WS-BA uses atomic transactions to preserve the autonomy of participating organizations whilst at the same time providing mechanisms to reach overall agreement.

Compensating actions may be registered with the parent activity to undo completed child tasks. Exception handlers make use of application logic so that the overall business activity can continue. Results of completed tasks take effect immediately rather than being held back until the overall activity completes. Participants are autonomous and can exit activities at will, thereby delegating processing to other scopes (participants) or exiting without knowing the outcome of the protocol.

This feature is similar to the resignation-by-participant optimization found in BTP. In case the outcome is negative, i.e. a fault occurs, previously completed tasks are compensated. The state of the business activity is made persistent between steps in order to reach a desired goal, even if exceptions occur. The WS-BA specification defines two outcome types for a coordinator, namely the atomic outcome type and the mixed outcome type.

The first type requires the coordinator to drive all participants to the same final state. The second type allows a coordinator to choose which participants need to commit and which need to compensate.

The behaviour of the coordinator is determined by the application driving the activity. The reader is referred to the specification for details on these protocols [15].


Each specification covers a certain level of the overall architecture required to build reliable business applications that span multiple systems and use web service technology. However, context information is not just relevant for transactions.

As an example of a workflow, consider loan processing: the person who wants a loan fills out a form, which is then checked by a loan officer. An employee who processes loan applications verifies the data in the form, using sources such as credit-reference bureaus.

When all the required information has been collected, the loan officer may decide to approve the loan; that decision may then have to be approved by one or more superior officers, after which the loan can be made.

Each human here performs a task; in a bank that has not automated the task of loan processing, the coordination of the tasks is typically carried out by passing of the loan application, with attached notes and other information, from one employee to the next. Other examples of workflows include processing of expense vouchers, of purchase orders, and of credit-card transactions.

(Figure: Workflow in loan processing.)

Today, all the information related to a workflow is more than likely to be stored in a digital form on one or more computers, and, with the growth of networking, information can be easily transferred from one computer to another.

Hence, it is feasible for organizations to automate their workflows. For example, to automate the tasks involved in loan processing, we can store the loan application and associated information in a database. The workflow itself then involves handing of responsibility from one human to the next, and possibly even to programs that can automatically fetch the required information.

Humans can coordinate their activities by means such as electronic mail. We have to address two activities, in general, to automate a workflow. The first is workflow specification: detailing the tasks that must be carried out and defining the execution requirements. The second problem is workflow execution, which we must do while providing the safeguards of traditional database systems related to computation correctness and data integrity and durability.

For example, it is not acceptable for a loan application or a voucher to be lost, or to be processed more than once, because of a system crash. The idea behind transactional workflows is to use and extend the concepts of transactions to the context of workflows. Both activities are complicated by the fact that many organizations use several independently managed information-processing systems that, in most cases, were developed separately to automate different functions.

Workflow activities may require interactions among several such systems, each performing a task, as well as interactions with humans.

A number of workflow systems have been developed in recent years. Here, we study properties of workflow systems at a relatively abstract level, without going into the details of any particular system.

Workflow Specification

Internal aspects of a task do not need to be modeled for the purpose of specification and management of a workflow.


In an abstract view, a task may use parameters stored in its input variables, may retrieve and update data in the local system, may store its results in its output variables, and may be queried about its execution state. The coordination of tasks can be specified either statically or dynamically. A static specification defines the tasks and dependencies among them before the execution of the workflow begins.

For instance, the tasks in an expense-voucher workflow may consist of the approvals of the voucher by a secretary, a manager, and an accountant, in that order, and finally the delivery of a check. The dependencies among the tasks may be simple: each task has to be completed before the next begins.

A generalization of this strategy is to have a precondition for execution of each task in the workflow, so that all possible tasks in a workflow and their dependencies are known in advance, but only those tasks whose preconditions are satisfied are executed. An example of dynamic scheduling of tasks is an electronic-mail routing system. The next task to be scheduled for a given mail message depends on what the destination address of the message is, and on which intermediate routers are functioning.
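The static, precondition-driven strategy can be sketched as follows (the voucher tasks mirror the example above; the scheduler loop is an illustrative simplification):

```python
# All tasks and their dependencies are known in advance; a task runs once
# its precondition (completion of its predecessors) is satisfied.

tasks = {
    "secretary approval":  [],
    "manager approval":    ["secretary approval"],
    "accountant approval": ["manager approval"],
    "deliver check":       ["accountant approval"],
}

def run_workflow(tasks):
    completed = set()
    while len(completed) < len(tasks):
        ready = [t for t, pre in tasks.items()
                 if t not in completed and all(p in completed for p in pre)]
        if not ready:  # cyclic or unsatisfiable preconditions: reject
            raise RuntimeError("workflow cannot make progress")
        for t in ready:      # precondition satisfied: execute the task
            print("executing:", t)
            completed.add(t)

run_workflow(tasks)  # executes the four approvals in dependency order
```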

Failure-Atomicity Requirements of a Workflow

The workflow designer may specify the failure-atomicity requirements of a workflow according to the semantics of the workflow. The traditional notion of failure atomicity would require that a failure of any task results in the failure of the workflow.

However, a workflow can, in many cases, survive the failure of one of its tasks (for example, by executing a functionally equivalent task at another site). Therefore, we should allow the designer to define failure-atomicity requirements of a workflow.

The system must guarantee that every execution of a workflow will terminate in a state that satisfies the failure-atomicity requirements defined by the designer. We call those states acceptable termination states of a workflow.

All other execution states of a workflow constitute a set of nonacceptable termination states, in which the failure-atomicity requirements may be violated. An acceptable termination state can be designated as committed or aborted. A committed acceptable termination state is an execution state in which the objectives of a workflow have been achieved. In contrast, an aborted acceptable termination state is a valid termination state in which a workflow has failed to achieve its objectives.

A workflow must reach an acceptable termination state even in the presence of system failures. Thus, if a workflow was in a nonacceptable termination state at the time of failure, during system recovery it must be brought to an acceptable termination state whether aborted or committed.

For example, in the loan-processing workflow, in the final state, either the loan applicant is told that a loan cannot be made or the loan is disbursed.

In case of failures such as a long failure of the verification system, the loan application could be returned to the loan applicant with a suitable explanation; this outcome would constitute an aborted acceptable termination. A committed acceptable termination would be either the acceptance or the rejection of the loan. In general, a task can commit and release its resources before the workflow reaches a termination state. However, if the multitask transaction later aborts, its failure atomicity may require that we undo the effects of already completed tasks for example, committed subtransactions by executing compensating tasks as subtransactions.

The semantics of compensation requires that a compensating transaction eventually complete its execution successfully, possibly after a number of resubmissions. In an expense-voucher-processing workflow, for example, a department-budget balance may be reduced on the basis of an initial approval of a voucher by the manager. If the voucher is later rejected, whether because of failure or for other reasons, the budget may have to be restored by a compensating transaction.
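The resubmission semantics can be sketched as follows (the bounded retry loop and the budget-restoring step are illustrative; a real scheduler would keep resubmitting or escalate to an operator rather than give up):

```python
# A compensating transaction must eventually succeed, so the scheduler
# resubmits it until it commits.

import time

def run_compensation(compensate, max_attempts=5, delay_seconds=0.1):
    for attempt in range(1, max_attempts + 1):
        try:
            compensate()
            return True                    # compensation committed
        except Exception:
            time.sleep(delay_seconds)      # back off, then resubmit
    return False  # illustrative bound only; see the caveat above

attempts = {"n": 0}
def restore_budget():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("budget service unavailable")
    print("budget restored on attempt", attempts["n"])

print(run_compensation(restore_budget))  # succeeds on the third attempt
```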

Execution of Workflows

The execution of the tasks may be controlled by a human coordinator or by a software system called a workflow-management system. A workflow-management system consists of a scheduler, task agents, and a mechanism to query the state of the workflow system. A task agent controls the execution of a task by a processing entity. A scheduler is a program that processes workflows by submitting various tasks for execution, monitoring various events, and evaluating conditions related to intertask dependencies.

A scheduler may submit a task for execution to a task agent, or may request that a previously submitted task be aborted. In the case of multidatabase transactions, the tasks are subtransactions, and the processing entities are local database management systems. In accordance with the workflow specifications, the scheduler enforces the scheduling dependencies and is responsible for ensuring that tasks reach acceptable termination states. There are three architectural approaches to the development of a workflow-management system.

A centralized architecture has a single scheduler that schedules the tasks for all concurrently executing workflows. The partially distributed architecture has one scheduler instantiated for each workflow. When the issues of concurrent execution can be separated from the scheduling function, the latter option is a natural choice.

A fully distributed architecture has no scheduler, but the task agents coordinate their execution by communicating with one another to satisfy task dependencies and other workflow execution requirements.

The simplest workflow-execution systems follow the fully distributed approach just described and are based on messaging. Messaging may be implemented by persistent messaging mechanisms, to provide guaranteed delivery. Some implementations use e-mail for messaging; such implementations provide many of the features of persistent messaging, but generally do not guarantee atomicity of message delivery and transaction commit.

Each site has a task agent that executes tasks received through messages. Execution may also involve presenting messages to humans, who then have to carry out some action.

When a task is completed at a site, and needs to be processed at another site, the task agent dispatches a message to the next site. The message contains all relevant information about the task to be performed. Such message-based workflow systems are particularly useful in networks that may be disconnected for part of the time, such as dial-up networks. The centralized approach is used in workflow systems where the data are stored in a central database.

The scheduler notifies various agents, such as humans or computer programs, that a task has to be carried out, and keeps track of task completion. It is easier to keep track of the state of a workflow with a centralized approach than it is with a fully distributed approach. The scheduler must guarantee that a workflow will terminate in one of the specified acceptable termination states.

Ideally, before attempting to execute a workflow, the scheduler should examine that workflow to check whether the workflow may terminate in a nonacceptable state. If the scheduler cannot guarantee that a workflow will terminate in an acceptable state, it should reject such specifications without attempting to execute the workflow.

As an example, let us consider a workflow consisting of two tasks represented by subtransactions S1 and S2, with the failure-atomicity requirement that either both or neither of the subtransactions should be committed. If S1 and S2 do not provide prepared-to-commit states (for a two-phase commit), and further do not have compensating transactions, then it is possible to reach a state where one subtransaction is committed and the other aborted, and there is no way to bring both to the same state.
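This safety condition can be formalized in a small check (the task attributes are illustrative):

```python
def is_safe(tasks):
    """All-or-nothing atomicity is enforceable only if every task can
    either be held in a prepared-to-commit state (so 2PC applies) or be
    undone by a compensating transaction after it commits."""
    return all(t["prepared_to_commit"] or t["compensatable"] for t in tasks)

workflow = [
    {"name": "S1", "prepared_to_commit": False, "compensatable": False},
    {"name": "S2", "prepared_to_commit": True,  "compensatable": False},
]
print(is_safe(workflow))  # False: the specification should be rejected
```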

Therefore, such a workflow specification is unsafe, and should be rejected. Safety checks such as the one just described may be impossible or impractical to implement in the scheduler; it then becomes the responsibility of the person designing the workflow specification to ensure that the workflows are safe.

Recovery of a Workflow

The objective of workflow recovery is to enforce the failure atomicity of the workflows.

The recovery procedures must make sure that, if a failure occurs in any of the workflow-processing components including the scheduler , the workflow will eventually reach an acceptable termination state whether aborted or committed. For example, the scheduler could continue processing after failure and recovery, as though nothing happened, thus providing forward recoverability. Otherwise, the scheduler could abort the whole workflow that is, reach one of the global abort states.

In either case, some subtransactions may need to be committed or even submitted for execution (for example, compensating subtransactions). We assume that the processing entities involved in the workflow have their own local recovery systems and handle their local failures.

To recover the execution-environment context, the failure-recovery routines need to restore the state information of the scheduler at the time of failure, including the information about the execution states of each task. Therefore, the appropriate status information must be logged on stable storage. We also need to consider the contents of the message queues. When one agent hands off a task to another, the handoff should be carried out exactly once: if the handoff happens twice, a task may get executed twice; if the handoff does not occur, the task may get lost.
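A single-handoff guard can be sketched as follows (an in-process illustration; in a real system the set of seen task identifiers would be kept on stable storage together with the message queue):

```python
# The receiver deduplicates by task id, so a message redelivered after a
# crash between send and acknowledgement does not execute the task twice.

class TaskAgent:
    def __init__(self):
        self.seen = set()   # persisted to stable storage in a real system

    def receive(self, task_id, task):
        if task_id in self.seen:
            return "duplicate ignored"
        self.seen.add(task_id)
        print("executing:", task)
        return "executed"

agent = TaskAgent()
print(agent.receive("loan-42/verify", "verify applicant data"))  # executed
print(agent.receive("loan-42/verify", "verify applicant data"))  # duplicate ignored
```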

Persistent messaging provides exactly the features to ensure positive, single handoff.

Workflow Management Systems

Workflows are often hand coded as part of application systems. For instance, enterprise resource planning (ERP) systems, which help coordinate activities across an entire enterprise, have numerous workflows built into them. The goal of workflow management systems is to simplify the construction of workflows and make them more reliable, by permitting them to be specified in a high-level manner and executed in accordance with the specification.

Workflows that cross organizational boundaries are becoming increasingly common. For instance, consider an order placed by an organization and communicated to another organization that fulfills the order. In each organization there may be a workflow associated with the order, and it is important that the workflows be able to interoperate, in order to minimize human intervention.

The Workflow Management Coalition has developed standards for interoperation between workflow systems. Current standardization efforts use XML as the underlying language for communicating information about the workflow.

See the bibliographical notes for more information.

Main-Memory Databases

To allow a high rate of transaction processing (hundreds or thousands of transactions per second), we must use high-performance hardware and must exploit parallelism. The long disk latency (about 10 milliseconds on average) not only increases the time to access a data item, but also limits the number of accesses per second.

We can make a database system less disk bound by increasing the size of the database buffer. Advances in main-memory technology let us construct large main memories at relatively low cost. Today, commercial 64-bit systems can support main memories of tens of gigabytes. For some applications, such as real-time control, it is necessary to store data in main memory to meet performance requirements.

The memory size required for most such systems is not exceptionally large, although there are at least a few applications that require multiple gigabytes of data to be memory resident. Since memory sizes have been growing at a very fast rate, an increasing number of applications can be expected to have data that fit into main memory. Large main memories allow faster processing of transactions, since data are memory resident.

However, there are still disk-related limitations: log records must be written to stable storage before a transaction is committed. The improved performance made possible by a large main memory may result in the logging process becoming a bottleneck. We can reduce commit time by creating a stable log buffer in main memory, using nonvolatile RAM (implemented, for example, by battery-backed-up memory).

The overhead imposed by logging can also be reduced by the group-commit technique discussed later in this section. Throughput (the number of transactions per second) is still limited by the data-transfer rate of the log disk. Buffer blocks marked as modified by committed transactions still have to be written, so that the amount of log that has to be replayed at recovery time is reduced.

If the update rate is extremely high, the disk data-transfer rate may become a bottleneck. If the system crashes, all of main memory is lost. On recovery, the system has an empty database buffer, and data items must be input from disk when they are accessed.

Therefore, even after recovery is complete, it takes some time before the database is fully loaded in main memory and high-speed processing of transactions can resume. On the other hand, a main-memory database provides opportunities for optimizations: since memory is costlier than disk space, internal data structures in main-memory databases have to be designed to reduce space requirements. There is no need to pin buffer pages in memory before data are accessed, since buffer pages will never be replaced.

Query-processing techniques should be designed to minimize space overhead, so that main-memory limits are not exceeded while a query is being evaluated; exceeding them would result in paging to the swap area and would slow down query processing. Operations such as locking and latching may become bottlenecks; such bottlenecks must be eliminated by improvements in the implementation of these operations. Recovery algorithms can be optimized, since pages rarely need to be written out to make space for other pages. TimesTen and DataBlitz are two main-memory database products that exploit several of these optimizations, while the Oracle database has added special features to support very large main memories.

To ensure that nearly full blocks are output, we use the group-commit technique. Instead of attempting to commit T when T completes, the system waits until several transactions have completed, or a certain period of time has passed since a transaction completed execution.

It then commits the group of transactions that are waiting, together. Blocks written to the log on stable storage would contain records of several transactions.

By careful choice of group size and maximum waiting time, the system can ensure that blocks are full when they are written to stable storage without making transactions wait excessively. This technique results, on average, in fewer output operations per committed transaction. Although group commit reduces the overhead imposed by logging, it results in a slight delay in commit of transactions that perform updates.
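The technique can be sketched as follows (the block size and maximum wait are illustrative parameters; a real implementation would also flush from a timer, not only on append):

```python
# Group commit: log records are buffered, and one block is forced to
# stable storage either when the block fills or when the oldest waiting
# transaction has waited long enough.

import time

class GroupCommitLog:
    def __init__(self, block_size=4, max_wait_seconds=0.01):
        self.buffer = []
        self.oldest = None
        self.block_size = block_size
        self.max_wait = max_wait_seconds

    def append_commit(self, txn_id):
        if not self.buffer:
            self.oldest = time.monotonic()   # start the wait clock
        self.buffer.append(f"COMMIT {txn_id}")
        self._maybe_flush()

    def _maybe_flush(self):
        full = len(self.buffer) >= self.block_size
        waited = self.buffer and time.monotonic() - self.oldest >= self.max_wait
        if full or waited:
            print("force log block:", self.buffer)  # one write, many commits
            self.buffer.clear()

log = GroupCommitLog()
for t in range(6):
    log.append_commit(t)
# Transactions 0-3 are forced as a single block; 4 and 5 stay buffered
# until the block fills or the maximum wait expires.
```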

The delay can be made quite small (say, 10 milliseconds), which is acceptable for many applications. These delays can be eliminated if disks or disk controllers support nonvolatile RAM buffers for write operations.

