SAP BODS follows a three-tier architecture: the client tier consists of user interfaces such as the Data Services Designer and the Management Console, the application tier consists of the Job Server, and finally we have the local and central repositories where all the objects of the system are stored. Now let us go through each component individually.
Data Services Designer:
It is the place where we develop, validate and execute ETL jobs. We can connect to the central repository only through the Designer. We can also import and export jobs to a file or repository from the Designer.
Local Repository:
It contains all the user-defined objects such as projects, jobs, workflows and data flows.
It also contains the system-defined objects such as the various transforms that are used to manipulate the source data as per the business rules. The connection to the database server is also created in the local repository; we call it a datastore. We can create a repository using the Repository Manager that comes with the installation.
The steps to create a repository are:
Start -> Programs -> BI Data Services -> Repository Manager -> select the type of repository (local, central or profiler) -> Repository Name -> Connection Name -> Database credentials -> Create.
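As a quick sanity check before (or after) running the Repository Manager, we can verify that the database credentials it asks for actually work from the machine where Data Services is installed. The following is a minimal Python sketch, assuming a SQL Server repository database; the driver, server, database and login names are hypothetical placeholders to adjust for your environment.

import pyodbc  # third-party ODBC library; any database client would do

# All connection details below are hypothetical placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=repo-db-host;"
    "DATABASE=DS_LOCAL_REPO;"
    "UID=ds_repo_user;"
    "PWD=secret"
)
cursor = conn.cursor()
cursor.execute("SELECT 1")  # simple connectivity test with the repository credentials
print("Repository database is reachable:", cursor.fetchone()[0] == 1)
conn.close()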
Central Repository:
It is similar to the local repository in that it also stores both user-defined and pre-defined objects, but it is used to support a multi-user environment; if the environment is not multi-user, its use is optional. In a multi-user environment, when one user has an object checked out, it cannot be modified by any other user. This check-in and check-out mechanism is similar to locking. We can create this repository too using the Repository Manager that comes with the installation.
The creation steps are the same as for the local repository, except that we select "central" as the repository type.
Job Server:
The Job Server holds the jobs in the form of .bat files (job execution commands), and its work is to start the job engine, which executes the jobs.
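To illustrate, an external scheduler can simply launch one of these exported .bat files so that the Job Server starts the job engine for that job. A minimal sketch, assuming a hypothetical export path (the actual file name and location depend on how the execution command was exported):

import subprocess

# Hypothetical path to a .bat file exported for a batch job
job_script = r"C:\DataServices\exported_jobs\Job_Load_Customers.bat"

# Launch the job and wait for it to finish; a non-zero return code means failure
result = subprocess.run(job_script, shell=True, capture_output=True, text=True)
print("Return code:", result.returncode)
print(result.stdout)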
Job Engine:
It executes the requested jobs.
Access Server:
This is used for Data Integrator (DI) real-time service jobs. This server controls the XML message passing between the source and target systems in real-time jobs.
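For example, a client application typically sends an XML message to the real-time service and receives an XML reply. The sketch below posts such a message over HTTP using only the Python standard library; the service URL and the message schema are purely hypothetical assumptions, since the actual endpoint depends on how the real-time service was published and the schema is defined by the real-time job's XML source.

import urllib.request

# Hypothetical endpoint of a published real-time service
url = "http://access-server-host:4000/realtime/CustomerLookup"

# Hypothetical request message; the real schema comes from the real-time job's XML source
request_xml = b"""<?xml version="1.0" encoding="UTF-8"?>
<CustomerRequest>
  <CustomerId>12345</CustomerId>
</CustomerRequest>"""

req = urllib.request.Request(
    url,
    data=request_xml,
    headers={"Content-Type": "text/xml"},
)
with urllib.request.urlopen(req) as response:
    reply_xml = response.read().decode("utf-8")  # XML reply produced by the real-time job
print(reply_xml)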
Web Server:
This is used for administration activities. Tasks such as scheduling jobs, executing jobs and creating users can be done through this server.
So these are all the components that exist in the BODS architecture. When we need to create an ETL job, we request the source data from the local repository, which in turn requests it from the connected database through the datastore. We then pick the required transforms and build the job in the Designer. We save the job in the local repository (and possibly in the central repository, as per requirement), and finally execute it through the Job Server and job engine. This is how the flow goes. The source or target can be an SAP or non-SAP system.