Email Notification in FileNet P8

September 13, 2016

It’s been a while since I was last on the blog, so here is what I have today, based on a few email requests: I want to familiarize developers with the Email Notification facility present in FileNet and show how to enable and configure this feature so that email is sent to the intended users based on the reminders we have set or on the initiation of a step we have configured.

With the help of Email Notification, a workflow can be set to send email to the intended users. Notifications can be sent on completion of a step, as reminder mails, and for deadlines.

To use this facility, the system must have access to an SMTP (Simple Mail Transfer Protocol) server, the Email Notification parameters must be configured and enabled, an Email Notification template must be created, and the user preference must be set. This breaks down into four steps:

  • Enabling Email Notification
  • Configuring Email Notification
  • Modifying Email Notification template
  • Setting user Preference

Enabling Email Notification

To enable Email Notification, certain event logging options have to be turned on on the Application Engine with the help of the Process Configuration Console.

1.1) Select the isolated region icon in the PCC (Process Configuration Console) and open its Properties tab.

picture1

1.2) After opening the Properties tab, on the Event Logging tab turn on the checkboxes for the Exception and Begin Operation categories.

picture2

2) Configuring Email Notification

2.1) Start the Process Task Manager from the Process Engine entry in the Start menu. On the Application Engine, we have to run the routercmd.bat command file found in the \Router subfolder of the Application Engine installation.

2.2) Configure email notification with the help of the Process Task Manager on the Process Engine: select the Process Service option and click the Notification tab.

picture3

SMTP Host: It specifies the complete host name for an SMTP server on the network running the SMTP service.

SMTP Port: The port on which SMTP service is running.

Character Set: The character set used to encode the email message.

Email Logon ID: The name used for the account you want the Process Engine to use to log on to the mail server application.

Email Logon Password: The password associated with the Email logon id.

Email From ID: The name you want to appear in the From field of the email notification.

Encode From field: Indicates whether to encode the contents of the Email from ID field.

3) Modifying the Email Notification Template

3.1) Navigate to the ..\fnsw_loc\sd\mas\1 directory.

picture4

3.2) Change the file attributes of the template we want to modify from read-only to read-write.

3.3) Open the file in a text editor such as TextPad and modify the HTML content of the file.

3.4) We can add the fields we want to appear in our template.

For example, in the file below we have chosen the subject to be displayed in the notification. The value of the subject is obtained from the $F_Subject variable, which is set in the event logs.

picture5

3.5) Save the file and restart the Process Engine server.

4) Setting User Preference

The user preference can be set from Workplace, depending on the type of email notification the user wants to receive.

picture6

picture7

And we are done. There’s a lot more information that you can find in the FileNet ECM help file.

Hope this post helps and answers your email.

Happy Blogging!!!

kVisia: Product Overview

March 18, 2015

Howdy Users and Buddies,

Today I want to share on kVisia, a product that provides the core functionality to automate, control and deliver an organization’s engineering and enterprise business processes to the desktop or Web. It holds the business rules, executes the XML configurations, builds the business forms, and implements the lifecycles that automate an organization’s business processes.

I think the only prerequisites you need are a basic concept of client-server applications and of the Documentum architecture.

Let me lay it out content-wise for better understanding, with a bit of background on what’s there, what’s missing and how this can be achieved.

  1. Abstract

The current era is defined by terms like data, information, knowledge and wisdom, and in this electronic world the difficulty of managing data in abundance has begotten many technologies, one of which is Documentum.

Documentum is nothing other than a content management tool, but its vastness and its ability to cater to almost all available data types are so rich that it is now widely used. If we unpack Documentum technically, it has an edge over a relational database management system in that it stores not only the metadata but also the content in its native format.

kVisia blends with Documentum to bolster the level and depth of automation which can be achieved by the Documentum alone and this is what this document discusses in detail.

  2. Introduction

kVisia comes from McLaren Software Limited, and users need a license before using it. McLaren is ISO 9001 certified and is an accredited Independent Software Vendor for both Documentum and FileNet. It has 360+ customers worldwide and operates from the UK, USA and Switzerland.

In fact, kVisia is an Enterprise Engineer product that encapsulates best practices and domain expertise. It is a user-configurable application consisting of XML files managed in the repository and standardized on a single ECM platform, and it is positioned as being powered by Documentum.

For a better understanding of kVisia, we can go through the following questions and answers, which will help one identify the need.

How does it help me?

Very quickly configure the user interface and deliver it to the Desktop and Web users.

Is it easy to change?

Very easy, configuration files are stored in the Docbase in XML format.

Configurable interface using objects and tables.

Do the configuration files allow me to control inputs?

Very complex business rules can be enforced via the XML files.

Validations at creation, promotion and check in.

  3. kVisia suite

The kVisia suite consists of the following three components:

  1. McLaren Studio
  2. McLaren_Core DocApp
  3. McLaren_Foundation DocApp

3.1 McLaren Studio

McLaren Studio provides a user-friendly desktop application that allows users to create and edit XML configuration files that will conform to the rules of the Document Type Definition (DTD).

An XML (eXtensible Markup Language) configuration file is a structured file consisting of many elements. Elements are ordered into a tree structure or hierarchy. Elements may have parent or child elements, and may also contain attributes. An element without a parent element is the highest in a tree structure and is called a root element. Not all of the elements will have child elements.

The structure of a valid XML file is defined in a DTD (Document Type Definition) file. The DTD contains the rules that apply to each XML configuration file; these include the definitions of parent and child elements, element attributes, and how each element can be used.

The appearance and behaviour of user dialogs can be modified without the need to change programme code simply by changing settings in the appropriate XML configuration file. A definition of the appearance and behaviour of a dialog is referred to as a “configuration”. A common need, however, is to display different configurations under different circumstances. For example, members of different groups may need to fill in different properties on documents, and the document’s properties may change as it moves from state to state in a lifecycle. The definition of such a scenario is referred to as a “mapping”. When you do not want to differentiate functionality between different groups, lifecycles and states, there is a value called {default} that will apply to all circumstances.

The Actions for which dialogs can be defined using an XML configuration file are:

  • New

When the user issues the New command, a customized dialog can be displayed. Commonly deployed features include automatic document numbering, interactive project allocation and any additional attributes (optional or mandatory), which may be required to be stored for the document.

  • Properties

A customized Properties dialog can be displayed. Fields can be made editable or read-only as required.

  • Import

When the user performs an import, a customized dialog can be displayed which may typically employ features similar to those used in the New dialog.

  • Copy

When the user performs a copy and paste action, a customized dialog can be displayed which may typically employ features similar to those used in the New dialog.

  • QuickFind

For each document type, a search dialog referred to as QuickFind can be defined and viewed by the user in the Docbase through the McLaren menu. The attributes that can be searched on are configurable and search options such as Between, Greater Than, Containing, etc., can be specified.

3.2 McLaren_Core DocApp

Having installed kVisia, you get two DocApps in the Docbase: McLaren_Core and McLaren_Foundation. The McLaren_Core DocApp is available to the users of a Documentum repository. It is always suggested that one get an overview of the additional functionality deployed with kVisia before configuring the product to suit the business need.

As the name suggests, the McLaren_Core DocApp provides the basic functionality (New, Import, Properties, Copy, QuickFind, etc.) to be configured. As discussed above, we use the XML configuration tool McLaren Studio to configure it.

For each Object Type used in the repository, an XML Configuration File can be created and configured to define the appearance and behavior of the various dialogs for documents of that Object Type. The definition of the appearance and behavior of a dialog is referred to as a configuration, and different configurations can be displayed in different circumstances.

3.3 McLaren_Foundation DocApp

The McLaren_Foundation installation enables you to deploy a pre-configured implementation of the Core technology, transforming it into a user application specifically adapted to the engineering environment and providing a ready-for-use engineering repository. It deals with the following configurations:

User Roles that are associated with Menu Systems and User Dialogs adapted to those roles.

Object Types for documents and folders.

XML Configuration Files that define the content, appearance and behavior of the New, Copy, Properties, Import and QuickFind dialogs for the supplied Object Types in accordance with the current user’s User Role and the document’s lifecycle and state, including pre-configured, automatically generated document numbering.

Check in dialogs which are specifically adapted for each supplied type of engineering document.

Automated document revision numbering that reflects common design practice: revision numbers start at 0, the first issued version is 1, the second 2, the third 3, and so on. When a document is checked in for the first time, a numeric value is assigned, and it is incremented each time the document is checked in.

  4. Traditional – VB (Desktop) vs kVisia (Desktop & Web)

The following comparison shows how much flexibility kVisia provides and how much time and effort it saves.

Traditional – VB (Desktop):

  • Obtain VB source code.
  • Open VBP.
  • Edit GUI.
  • Add code to populate list.
  • Add validation rules.
  • Save source code.
  • Compile DLL.
  • Create CAB file.
  • Open DocApp in Composer.
  • Edit the component ACX and replace CAB file.
  • Check DocApp back in.
  • Not available until next user login.

kVisia (Desktop and Web):

  • Check out XML.
  • Edit XML in kVisia Studio.
  • Add XML tag.
  • Specify properties.
  • Save and check back in.
  • Change available immediately (no logout required).

  5. Benefits
  • Easy to manage the development process, as applications are configured via XML using Studio
  • Fast route from innovation to user acceptance
  • Shorter time from design through to development
  • Shorter time to deploy to the business users
  • Reduces development costs
  • Effort for future upgrades is minimised
  • Simple deployment with control at a local level
  • Speeds up the implementation process through rapid configuration

Hope you like the post. Feel free to post your comments and I will reply to any queries you have.

Adios, have a great day ahead…

Documentum Folder Bulk Export

February 10, 2013

Hi Readers,

It’s been a while since I was away from my blog… here I am, back with all your requests.

Someone wrote a note asking me to post something on folder export, so here it is:

There is often a requirement to export an entire folder structure from a Documentum repository to the local file system. The Documentum web applications allow export of only a single file, not an entire folder structure with all its files.

This post illustrates the usage of a free tool, the ‘Documentum Deep Export Utility’, available on the EMC Developer Community site, to export an entire folder structure with all its files from Documentum to the local file system.

It also details the procedure for enabling Deep Export natively on Documentum 6.5, enumerating the pros and cons of using the native functionality vs. the external utility.

Let me organize this into sections:

Scope

The cabinet structure in Documentum is required to be replicated to the local file system, including the folder structure and all files. Export of metadata from Documentum is not included in scope.

Recommendations

  • Ensure that a fast data connection to the content server is available
  • Ensure that there is sufficient disk space for exporting the files
  • This utility has not been tested for docbases which have a very high degree of interlinked folders.

Enabling Native Deep Export Functionality on Documentum 6.5

This method enables the native Deep Export functionality introduced in Documentum 6.5. It adds a right-click menu item, labeled ‘Export’, when the user right-clicks a folder in Webtop. The whole folder is saved to the user-specified directory, sans the metadata.

  • Deep export of a folder whose name contains special characters (for example : ? < > " | *) is not supported. If you try to export such a folder, the application throws an error.
  • Deep export of a hidden folder is allowed when you export its parent folder: even if a folder is not visible in Webtop, all sub-folders, including hidden ones, get exported during Deep export.
  • Deep export is supported for UCF content transfer, not HTTP content transfer.
  • Only the primary content is exported, no renditions are exported.
  • Only the current versions of documents are exported.
  • A VDM root and its children are exported starting from the level that is present in the folder selected for Deep export.
  • If the child of a VDM is in multiple folders, it is only exported once.
  • Deep export offers only trimmed-down support for VDMs.

Image

Enabling Deep Export:

  • Go to the webapps folder where Webtop application is installed. If Apache Tomcat is being used the location would be “C:\Program Files\Apache Software Foundation\Tomcat 7.0\webapps\webtop”
  • In the Webtop folder, go to the ‘wdk’ folder and open the ‘app.xml’ file
  • Search for this text:

<!-- Enable / Disable (true | false), the deep / folder export functionality. -->

         <deepexport>

            <enabled>true</enabled>

         </deepexport>

With <enabled> set to true, native Deep Export in D6.5 Webtop will be enabled (a restart of the application server may be needed for the change to take effect).

Performing Deep Export via the Documentum Deep Export Utility

Setting up the Environment

Ensure that the DeepExport utility is unzipped to a folder on the system with a content server installation

  1. Register the SBO. Go to C:\Documentum\Config
    Append the following line to the file ‘DBOR.PROPERTIES’

    com.documentum.devprog.deepexport.IDpDeepExportService=service,com.documentum.devprog.deepexport.DpDeepExportService,1.0

  2. If the source server where the files are to be exported from is a remote server, you need to set dfc.properties to point to the remote server. If the source server is local, no changes to dfc.properties are necessary.
    Edit dfc.properties and enter the name of the remote server in ‘dfc.docbroker.host=<content server>’
  3. Assuming the deepExport utility is extracted to the ‘C:\deepExportService’ folder,
    go to this folder and open setEnv.bat.
    Set the Java path (including quotes),
    e.g. JAVA_HOME="C:\j2sdk1.6.0_27"
  4. Set the path of dctm.jar & dfc.jar. Please note that although the path of the config directory is not mentioned in the manual included with the utility, you also need to include it for correct operation of the utility. This has been included in the example below.

         eg. DCTM_JAR=C:\Program Files\Documentum\dctm.jar;C:\Documentum\config

Run the batch file from a command line

Image

Verify that the classpath has been set correctly by typing echo %CLASSPATH% on the command line.

Performing the Export

Go to the deepExport folder and edit the ‘testDeepExport.bat’

  1. Set the username and password to the username and password of a user of the docbase. Ensure that the user has sufficient permissions
  2. Set the docbase name variable
    eg. DOCBASE_NAME=TESTING
  3. Set the docbase source folder name variable
    DOCBASEFOLDER=”/lab_test/Bananas”
  4. Set the export folder on the local system
    FILESYSTEMDIRECTORY=”E:\\export”
    Please note that the path uses double backslashes; also ensure that sufficient space is present on the system
  5. Execute the batch file

 

Image

The process takes quite some time. When the command prompt returns, the utility has finished the export.

Comparison of Utility v/s Native export

The advantage of the utility-based export is that no changes are required to the Documentum environment itself. The export can even be done remotely by setting dfc.properties as instructed above. On the other hand, the native export functionality is recommended if the export feature has to be provided to the end users as a permanent feature.

For more information, check out the link below:

http://developer.emc.com/developer/componentexchange.htm#0900c35580916f6d

 

 

Relationship and Virtual Documents in Documentum

November 26, 2011

This Post explains how the concept of relationships and virtual documents can be used to relate objects in Documentum using Documentum Foundation Classes (DFC).

Definitions:

  1. Relationship: A relationship implies a connection between two objects. When an object is related to another object, we can define which object is the parent and which is the child, or whether they are equal. Relationships can be system-defined as well as user-defined. In this post, we confine ourselves to user-defined relationships.
  2. Virtual Document: In Documentum, a document which holds other documents, i.e. a document which acts as a container, is called a virtual document. A virtual document can contain other virtual documents. The document which acts as the container is called the parent document, while the documents contained in the parent document are called child documents.

Overview on relationship:

Two built-in object types, dm_relation and dm_relation_type, are used to create relations between any two objects. The dm_relation_type object defines the behavior of the relation. The dm_relation object identifies the dm_relation_type object and the two objects between which the relation needs to be created. Pictorially it can be shown as:

Figure-1

The dm_relation_type object has following attributes:

  1. child_parent_label: It defines the child to parent feature of the relation.
  2. parent_child_label: It defines the parent to child feature of the relation.
  3. description: It gives the general description of the relation.
  4. parent_type: It defines the type of objects that will act as parents.
  5. child_type: It defines the type of objects that will act as children.
  6. direction_kind: It defines the nature of relationships between the objects. The expected values are:

a)    1 – Parent to Child

b)    2 – Child to Parent

c)    3 – Objects are at equal level

7. integrity_kind: It specifies the type of referential integrity enforced when either of the two related objects is to be deleted. The expected values are:

a)    0 – Any of the two related objects can be deleted.

b)    1 – As long as the relation exists, neither of the related objects can be deleted.

c)    2 – If one of the related objects gets deleted, other one also gets deleted.

      8. relation_name: It specifies a name for the relation.
      9. security_type: It indicates the type of security to be used for the relation object. The valid values are:

a)    SYSTEM: If this value is used, then super-user privileges are required for creating, deleting or modifying the relationships pertaining to this dm_relation_type object.

b)    PARENT: In this case, the ACL for the relation is inherited from the parent object in the relation and RELATE permission is required to create, modify, or drop the relation. The exception to this is if the parent object is not a subtype of dm_sysobject, then no security will be enforced.

c)    CHILD: In this case, the ACL for the relation is inherited from the child object in the relation and RELATE permission is required to create, modify, or drop the relation. The exception to this is if the child object is not a subtype of dm_sysobject, then no security will be enforced.

d)    NONE – In this case, no security is applied. All users can create, modify or delete this kind of relationship.

The dm_relation object has following attributes:

  1. child_id: The r_object_id or i_chronicle_id of the child object in this relation. If the i_chronicle_id is used, then the ‘child_label’ attribute can be used to bind the parent object to a particular version of the child.
  2. parent_id: The r_object_id or i_chronicle_id of the parent object in the relation. If the attribute ‘permanent_link’ is set to TRUE, then the i_chronicle_id of the object must be used.
  3. permanent_link: If every new version of the parent object has to be related with the child object, then the value for this attribute must be set to TRUE and i_chronicle_id should be used in the parent_id attribute. By default the value is FALSE.
  4. relation_name: It specifies the value of relation_name attribute of the dm_relation_type object that defines the type of relationship.
  5. child_label <Optional>: If i_chronicle_id is used in the attribute ‘child_id’, then the label of the version of the child object is to be specified here.
  6. description <Optional>: Specifies the description.
  7. effective_date<Optional>: Not used by the system, a user-defined date. Custom logic could check this date to determine the state of the relationship.
  8. expiration_date<Optional>: Not used by the system, a user-defined date. Custom logic could check this date to determine the state of the relationship.
  9. order_no<Optional>: Not used by the system. Custom logic could use this integer value to order a set of relationships.
  • Creation of dm_relation_type object using DQL: The query used to create a dm_relation_type object is as follows:

create dm_relation_type object
set child_parent_label = '<Child to parent label>',
set parent_child_label = '<Parent to child label>',
set description = '<Description>',
set parent_type = '<Document type>',
set child_type = '<Document type>',
set direction_kind = <1 or 2 or 3>,
set integrity_kind = <0 or 1 or 2>,
set relation_name = '<Name of Relation>',
set security_type = '<SYSTEM or PARENT or CHILD or NONE>'

  • Creation of dm_relation object using DFC: The following methods, each of which returns the new relation object, can be used to create a dm_relation object.

1. addChildRelative(relationTypeName, childId, childLabel, isPermanent, description): This method has to be invoked on the object which is going to act as the parent in the relation. The parameters it takes are:

a)    relationTypeName – Name of a valid dm_relation_type object.

b)    childId – The r_object_id or i_chronicle_id of the child object of the relation.

c)    childLabel – Version label of the child object. If this is ‘null’, the relation will contain no child label.

d)    isPermanent – Specifies whether the link is permanent. Valid values are TRUE or FALSE.

e)    description – Specifies the description for the relation object. If ‘null’, the relation object will not have a description.

2. addParentRelative(relationTypeName, parentId, childLabel, isPermanent, description): This method has to be invoked on the object which is going to act as the child in the relation. It takes the same parameters as addChildRelative, except that instead of the r_object_id or i_chronicle_id of the child object, we pass the r_object_id or i_chronicle_id of the parent object as the parameter parentId.

Note: dm_relation object can be created through DQL also.
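For illustration, here is a minimal DFC sketch of creating a relation with addChildRelative. It assumes an open session and an existing dm_relation_type whose relation_name is ‘contract_relation’ (a hypothetical name; the class and variable names are also hypothetical):

import com.documentum.fc.client.IDfRelation;
import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.common.DfException;

public class RelationHelper {

    // Relate 'child' to 'parent' via the dm_relation_type named relationName.
    // Passing the child's chronicle id plus a version label keeps the relation
    // bound to that version of the child.
    public static IDfRelation relate(IDfSysObject parent, IDfSysObject child,
                                     String relationName) throws DfException {
        return parent.addChildRelative(
                relationName,              // must match relation_name of a dm_relation_type object
                child.getChronicleId(),    // i_chronicle_id of the child object
                "CURRENT",                 // child version label (null for no label)
                true,                      // isPermanent, sets permanent_link = TRUE
                "created via DFC sketch"); // description
    }
}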

Overview on virtual document:

A virtual document provides a way of combining documents in various formats into one consolidated document. For example, one Word document, one PDF document and one image can be combined to form one virtual document. There is no limitation on the nesting of documents. Either one particular version or all of the versions of a component can be combined with a virtual document. Two object types are used to store information about virtual documents. They are:

  1. Containment Object Type: It stores the information that links a component to a virtual document. Every time a component is added to a virtual document, a containment object is created for that component. The attributes of this object type can be set by the methods AppendPart, InsertPart, UpdatePart.
  2. Assembly Object Type:  An assembly object provides a snapshot of a virtual document at a given instance.

Creation of a virtual document using DFC:

A document can be converted to a virtual document by invoking the method setIsVirtualDocument on it. This method sets the r_is_virtual_doc attribute of the document.

Note: Virtual documents can also be created using clients such as Webtop, DA etc as well as through DQL.

Requirement in the project:

Consider there are three main documents A, B and C. Essentially, A, B and C represent different contracts. Consider another set of documents A1, A2, A3, B1, B2, B3, C1, C2 and C3. A1, A2 and A3 are directly related to A; B1, B2 and B3 are directly related to B; C1, C2 and C3 are directly related to C. Also, A is related to B (A is child, B is parent), B is related to C (B is child and C is parent), and C is related to A (C is child and A is parent). The documents being referred to here are Documentum documents of a certain system-defined or user-defined document type.

As per the requirements:

  1. For every new version of the documents the existing relations should be valid.
  2. From the document A, we should be able to navigate to A1, A2 and A3 and also to the documents B and C. Similarly for B and C.
  3. Depending on a particular attribute of main documents (A, B and C), there should be dynamic creation or deletion of relationships between contracts.

Resolution of requirement # 1:

Issue encountered:

The documentation says that when a dm_relation object is created with attribute values permanent_link = TRUE and child_label = ‘CURRENT’, then for every new version of the parent or child a new instance of the dm_relation object is created and the relation is created between the latest versions of the child and parent objects. But on implementing this, the latest version of the parent object was always related to the child object version with which the relation was initially created.

Issue resolution:

To maintain the relation across the current versions of the documents, and for easy navigation from parent to child documents, the concept of virtual documents was used in addition to relationships.

All the main documents A, B and C were converted to virtual documents. The child documents A1, A2 and A3 were added as children to the newly converted virtual document A (and similarly for B and C). For this, the following DFC methods were used, in the order specified:

  1. asVirtualDocument(lateBindingValue, followRootAssembly): This method is invoked on a virtual document (in this case on A, B and C) and returns the virtual document representation of the object on which it is invoked. The parameters it takes are:
    a) lateBindingValue – the version label of the virtual document. To meet our requirement the value should be ‘CURRENT’.
    b) followRootAssembly – if the value is set to TRUE, the assembly specified by the root node will be used as the virtual document.
  2. getRootNode(): This method is invoked on the virtual document representation of a virtual document. It returns the root node of the virtual document. The root node is essentially the node at the top of the virtual document tree. (In our case A, B and C are root nodes.)
  3. addNode(parentNode, insertAfterNode, objectChronId, binding, followAssembly, overrideLateBindingValue): This method is invoked on the virtual document representation and adds a new node under parentNode. The parameters it takes are:
  • parentNode: The root node.
  • insertAfterNode: A virtual document node that will immediately precede the new node in the virtual document’s hierarchy. If this parameter is null, the new node is placed as the first child of parentNode.
  • objectChronId: The i_chronicle_id of the document which is to be added as a child of the virtual document (in our case the i_chronicle_id of A1, A2, A3, B1, B2, B3, C1, C2 or C3).
  • binding: The version label of the version of the child document with which we want to bind the child document to the virtual document.
  • followAssembly: Set to TRUE if the follow_assembly attribute has to be set to TRUE for the component.
  • overrideLateBindingValue: Set to TRUE if the version label identified in binding is to be used to resolve late-bound descendants of this component.

So, using the concept of virtual documents, the current versions of the documents are always present in the parent virtual document. Thus the current versions of A and of A1, A2 and A3 are always related. And since A has been converted to a virtual document, we can navigate to A1, A2 and A3 just by clicking on A.
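To make the above concrete, here is a minimal DFC sketch of converting a contract into a virtual document and attaching one child, assuming an open session and documents that are not already locked (the class and variable names are hypothetical):

import com.documentum.fc.client.IDfSysObject;
import com.documentum.fc.client.IDfVirtualDocument;
import com.documentum.fc.client.IDfVirtualDocumentNode;
import com.documentum.fc.common.DfException;

public class VdocBuilder {

    // Convert 'parent' (e.g. contract A) into a virtual document and add
    // 'child' (e.g. A1) as a CURRENT-bound child node.
    public static void addChild(IDfSysObject parent, IDfSysObject child) throws DfException {
        parent.checkout();                 // lock the parent before changing its structure
        parent.setIsVirtualDocument(true); // sets r_is_virtual_doc

        IDfVirtualDocument vdoc = parent.asVirtualDocument("CURRENT", false);
        IDfVirtualDocumentNode root = vdoc.getRootNode();

        // null insertAfterNode places the new node as the first child of root;
        // binding "CURRENT" keeps the current version of the child attached.
        vdoc.addNode(root, null, child.getChronicleId(), "CURRENT", false, false);

        parent.checkin(false, null);       // version the parent with its new structure
    }
}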

Resolution of requirement # 2:

Now, using the concept of relationships, a relation was created between each pair: A and B, B and C, and C and A. Thus navigation across the different contracts A, B and C became possible. Pictorially this can be shown as:

Figure-2

In the above figure the arrow denotes the relationship between two documents.

Resolution of requirement #3:

Requirement #3 states that relations between the main documents should be created or deleted dynamically, depending on a particular attribute, say attr. The value of attr for, let’s say, document A determines the document to which A will act as a child. If the value of attr for document A is changed so as to imply that A and B are no longer related, then the relation object existing between A and B should be destroyed. If the new value points toward a new document D, then a relation has to be created between A and D.

So the change in the value of that particular attribute needs to be intercepted. This can be done as follows:

  1. Write a TBO (Type Based Object) for the document type to which A belongs.

  Note: For details on TBOs, refer to BusinessObjectsDevelopersGuide.pdf provided by Documentum.

  2. In the TBO, override the method setString(attribute name, attribute value) if the attribute attr is a single-valued attribute, or appendString(attribute name, attribute value) if it is a multi-valued attribute. These two methods capture the attributes and their values for a document type: setString captures the single-valued attributes while appendString captures the multi-valued ones.
  3. In either method, compare the old value of attribute attr with the new one. If there is any change, destroy all the existing relations in which document A is the child. This can be done using the DFC method removeParentRelative(relationTypeName, parentId, childLabel), invoked on A. The parameters it takes are:
    a) relationTypeName – Name of the relation object.
    b) parentId – r_object_id of the parent object.
    c) childLabel – version label of the child object.
    Then use the method addParentRelative, as explained above, to relate document D as parent to document A. A sketch follows this list.
  4. Every time the value of attribute attr changes and the document is saved, the corresponding TBO is invoked and the methods above are executed. Thus dynamic creation and deletion of relations is achieved.
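Here is a minimal, hypothetical TBO sketch of the interception described above. The attribute name contract_ref, the relation name contract_relation, and the assumption that attr stores the parent’s r_object_id are all illustrative; a production TBO would also implement the IDfBusinessObject housekeeping methods, omitted here for brevity:

import com.documentum.fc.client.DfDocument;
import com.documentum.fc.common.DfException;
import com.documentum.fc.common.DfId;

// Hypothetical TBO for the contract document type. It intercepts changes to
// the single-valued attribute "contract_ref" and re-creates the relation in
// which this document is the child.
public class ContractDoc extends DfDocument {

    private static final String ATTR = "contract_ref";          // hypothetical attribute
    private static final String RELATION = "contract_relation"; // hypothetical dm_relation_type name

    @Override
    public void setString(String attrName, String value) throws DfException {
        if (ATTR.equals(attrName) && !value.equals(getString(ATTR))) {
            String oldParentId = getString(ATTR); // assumed to hold the old parent's r_object_id
            if (oldParentId != null && oldParentId.length() > 0) {
                // Drop the existing relation in which this document is the child...
                removeParentRelative(RELATION, new DfId(oldParentId), null);
            }
            // ...and relate this document, as child, to the new parent.
            addParentRelative(RELATION, new DfId(value), null, true, null);
        }
        super.setString(attrName, value);
    }
}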

Conclusion: Thus the concepts of relationships and virtual documents can be used together to relate objects in Documentum using Documentum Foundation Classes (DFC).

Documentum Full Text Index Server

November 2, 2011

For faster and better search functionality, EMC developed a Full Text Index Server, which is installed separately from the content management software to provide an index-based search capability. In version 5.2.5 SPx the full-text search engine was Verity; this was changed to FAST (Fast Search & Transfer) from 5.3 SPx onwards, and xPlore replaced FAST later still.

In Verity we had to explicitly define the attributes to be indexed in the content server configuration, whereas one of the salient features of FAST is that, by default, all attributes are indexed along with the content of the document. Since FAST is no longer tightly coupled with the installation of the content server, one has the option of not installing the Index Server. If the Full Text Index Server is not installed, simple search performs a case-sensitive database search against the object_name, title and subject attributes of dm_sysobject and its subtypes.

This post describes the various components of Index Server and their operations.

1.   Software Components

Full-text indexing in a Documentum repository is controlled by three software components:

  • Content Server, which manages the objects in a repository, generates the events that trigger full-text indexing operations, queries the full-text indexes, and returns query results to client applications.
  • The index agent, which exports documents from a repository and prepares them for indexing.
  • The index server, which is a third-party server product that creates and maintains the full-text index for a repository. The index server also receives full-text queries from Content Server and responds to those queries.

2.   Set Up Configuration

a)  Basic Set Up

The basic indexing model consists of a single index agent and index server supporting a single repository. The index agent and index server may be installed on the Content Server host or on a different host.

b)  Consolidated Set Up

In a consolidated deployment, a single index server provides search and indexing services to multiple repositories. The repositories may be in the same Content Server installation or on different hosts. However, all repositories must be of the same Content Server version.

3.   Index Server Processes

The index server consists of five groups of processes that have different functions.

a) Document processors

Document processors (also sometimes called procservers) extract indexable content from content files, convert DFTXML to FIXML (a format that is used directly by the indexer), and merge the indexable content with the metadata during the DFTXML conversion process. Document processors are the largest consumer of CPU power in the index server.

b) Indexer

The indexer creates the searchable full-text index from the intermediate FIXML format. It consists of two processes. The frtsobj process interfaces with the document processor and spawns different findex processes as necessary to build the index from FIXML.

c) Query and Results servers

The QR Server (Query and Results Server) is a permanently-running process that accepts queries from Content Server, passes queries to the fsearch processes, and merges the results when there are multiple fsearch processes running.

The index server can run in continuous mode or in a special mode called suspended mode. In suspended mode, FIXML is generated for any updates to the index but is not integrated into the index. When the index server is taken out of suspended mode, the index is updated. Running in suspended mode speeds up the indexing process. Suspended mode should be used when the requirement is to index a large volume of documents or to re-index an entire repository.

4.   Health check up for Index Server processes

Execute the following command through command prompt

nctrl sysstatus

This will list all the Index Server processes with their status.

Another option is to use the Index Server admin console at http://localhost:<port no.>/admin and navigate to the "System Management" tab.

Navigate to the "Matching Engines" tab for details on the total no. of documents in all the filestores (if there are multiple repositories) and the no. of documents processed by the Index Server. It also provides a link to the Index Server log file.

5.   How to determine Index Server ports

Using the Index Server base port we can determine the ports for the various Index Server processes:

Index Server admin console: Base Port + 3000

FAST Search console: Base Port + 2100

For example, with the default base port of 13000, the admin console runs on port 16000 and the FAST Search console on port 15100.

6.   Fulltext indexing queue messages

When a document has been marked and submitted for fulltext indexing, it is queued to the Index Agent/Index Server.

The fulltext index status can be checked with the following DQL query, supplying the document’s r_object_id as the item_id value:

select sent_by, date_sent, item_name, content_type, task_state, message from dmi_queue_item where item_id = ''

The task_state can have one of the following values:

‘’ – The item is available to be picked up by an Index Agent for indexing.

‘acquired’ – The item is being processed. If an Index Agent stops abruptly, a queue item can be left in this state until the Index Agent is restarted or an Administrator clears the queue item.

‘warning’ – The item was indexed with a warning. Often it indicates that the content of the object failed to index but the meta-data was successfully indexed.

The ‘message’ attribute and the Index Agent log will have further details.

‘failed’ – The item failed to index, please refer to the ‘message’ attribute and the Index Agent log for more information.

‘done’ – The item was successfully indexed.
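If you prefer to check this from code, here is a minimal DFC sketch that runs the same kind of query, assuming an open session (the class name and the objectId parameter are hypothetical):

import com.documentum.fc.client.DfQuery;
import com.documentum.fc.client.IDfCollection;
import com.documentum.fc.client.IDfQuery;
import com.documentum.fc.client.IDfSession;
import com.documentum.fc.common.DfException;

public class IndexQueueCheck {

    // Print the indexing status of a queued object; objectId is the r_object_id
    // of the document that was submitted for indexing.
    public static void printStatus(IDfSession session, String objectId) throws DfException {
        IDfQuery query = new DfQuery();
        query.setDQL("select item_name, task_state, message from dmi_queue_item"
                + " where item_id = '" + objectId + "'");
        IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
        try {
            while (rows.next()) {
                System.out.println(rows.getString("item_name") + " -> "
                        + rows.getString("task_state") + " : "
                        + rows.getString("message"));
            }
        } finally {
            rows.close(); // always release the collection
        }
    }
}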

7.   Index Agent Modes

An index agent may run in one of three operational modes:

normal

In normal mode, the index agent processes index queue items and prepares the SysObjects associated with the queue items for indexing. When the index agent successfully submits an object for indexing, it deletes the queue item from the repository. If the object is not submitted successfully, the queue item remains in the repository and the error or warning generated by the attempt to index the object is stored in the queue item.

migration

In migration mode, the index agent processes all SysObjects in a repository sequentially in r_object_id order and prepares them for indexing. A special queue item, the high-water mark queue item, is used to mark the index agent’s progress in the repository.

An index agent in normal mode and an index agent in migration mode cannot simultaneously update the same index.

file

In file mode, a file is used to submit a list of object IDs to the index agent; this mode is used when a new index is created and index verification determines which objects are missing from the index.

8.   Switching modes of Index Agent

At the time of Index Agent set up the wizard gives an option to start the Index Agent under “Normal” or “Migration” mode.

The following steps should be performed to change the Index Agent from one mode to another.

1. Login to Index Agent Admin console through http://localhost:<index agent port no.>/IndexAgent<no.>/login.jsp

e.g., http://localhost:9081/IndexAgent1/login.jsp

2. Stop the Index Agent

3. Now change the Index Agent mode from Normal to Migration or Migration to Normal as the case may be.


4. Click on OK

5. Start the Index Agent again.

Note:

i) While in Migration mode the Index Agent doesn’t appear in DA under the Indexing Management tab. The Index Agent admin screen will provide details of the no. of documents processed out of the total no. of documents.

ii) If the Index Agent service is restarted from the services console, then start the Index Agent from the Index Agent admin console or through DA under the Indexing Management tab.

9.   Re-configuring Index Agent and FAST

Configuring another IA and FAST instance for a repository previously configured to work with one IA and FAST doesn’t modify the dm_ftengine_config object, and the IA fails to start, displaying an error about connecting to the old FAST machine.

To resolve:

Manually update the dm_ftengine_config object based on the settings of the new machine:

1. Go to IAPI and execute the following API –

iapi> retrieve,c,dm_ftengine_config

2. Note the object_id retrieved by the above API and use it to execute the following API –

iapi> dump,c,l

3. In the dump results, note the following param_name/param_value pairs:

fds_base_port should match 13000 or the base port number for Index Server Install

fds_config_host should match the host name where the Index Server is installed.

and so on….

4. The param_name/param_value pairs should be changed to match the values of the new index server installation.

5. Delete the following via dql:

delete dm_ftengine_config object where r_object_id = 'old_value'

delete dm_ftindex_agent_config object where r_object_id = 'old_value'

6. Run the index agent configuration program to create new index agent.

10.   Relocating fulltext indexes in Index Server

The following steps describe how we can change the location of fulltext indexes

 1. Shutdown the Index Agent

 2. Shutdown the Index Server

 3. Copy the indexes to the target location (both the fixml and the index directories)

 4. Default locations of the fixml and index directories:

      for Windows – %DOCUMENTUM%/data/fulltext

      for Unix – $DOCUMENTUM/data/fulltext

 5. Edit the following:

 In Windows: %DOCUMENTUM%/fulltext/IndexServer/etc/searchrc-1.xml. Change the "index path"

 In Unix: $DOCUMENTUM/fulltext/IndexServer/etc/searchrc-1.xml. Change the “index path”

6. Edit the following:

 In Windows: %DOCUMENTUM%/fulltext/IndexServer/etc/config_data/RTSearch/webcluster/rtsearchrc.xml. Change fixmlpath and fsearchdatasetdir to the new path

 In Unix: $DOCUMENTUM/fulltext/IndexServer/etc/config_data/RTSearch/webcluster/rtsearchrc.xml. Change fixmlpath and fsearchdatasetdir to the new path

 7. Start up the Index Server and then the Index Agent

11.   dm_FTCreateEvents Job

The Create Full-Text Events tool (dm_FTCreateEvents) may be used in two ways:

 a) To complete an upgrade by causing any objects missed by the pre-upgrade indexing operations to be indexed.

 The job generates events for each indexable object added to a repository between the time a new 5.3 or later full-text index is created for a 5.2.5 repository and the time the repository is upgraded to 5.3.

 This is the out-of-the-box behavior of the job.

 b) To generate the events required to re-index an entire 5.3 SP1 or later repository.

 Re-indexing the repository does not require deleting the existing index.

Please refer to the screenshot for the configuration of dm_FTCreateEvents Job –

To generate the events required to re-index an entire 5.3 SPx or later repository, the -full_reindex argument must be set to TRUE.

The first time the job runs in its default mode, the job determines the last object indexed by an index agent running in migration mode and the date on which that object was indexed. The job searches for objects modified after that date and before the job runs for the first time and generates events for those objects. On its subsequent iterations, the job searches for objects modified after the end of the last iteration and before the beginning of the current iteration.

Before the job is run in a 5.3 SP1 or later repository with the argument -full_reindex set to TRUE, you must create a high-water-mark queue item (dmi_queue_item) manually using the API –

create,c,dmi_queue_item

save,c,l

and specify the r_object_id of the queue item as the -high_water_mark_id argument of the dm_FTCreateEvents Job.

In case you get the following error message in the job’s report –

FTCreateEvents was aborted. Error happened while processing job. Error: No high water mark found for qualification:

Verify the -high_water_mark_id argument and check whether the API was executed after installation or re-installation of the index server to get the required r_object_id argument.

Disable the job if the application is not using Full Text Index Server.

The job can also be de-activated if the following events are registered for dm_fulltext_index_user:

  • dm_save
  • dm_destroy
  • dm_readonlysave
  • dm_checkin
  • dm_move_content

Execute the following query to verify the same:

select event from dmi_registry where user_name = 'dm_fulltext_index_user'

12.   Using FT Integrity Tool

Modify the parameter file:

a) Log in to the index server host and navigate to the parameter file location.
On Windows: drive:\Program Files\Documentum\IndexAgents\IndexAgentN\webapps\IndexAgentN

b) Open the ftintegrity.params.txt file in a text editor.

The first line is: -D repositoryname

where repositoryname is the repository for which you created a new index.

c) Add the following two lines immediately after the first line

-U username

-P password

where username is the user name of the Superuser whose account was used to install the index agent and password is the Superuser’s password.

 d) Save the ftintegrity.params.txt file to %Documentum%\fulltext\IndexServer\bin (Windows).

Sample FT Integrity params file

Note: The instructions above save a Superuser name and password to the file system in a plain text parameter file. For security reasons, you may wish to remove that information from the file after running the FTIntegrity tool. It is recommended that you save the parameter file in a location accessible only to the repository Superuser and the installation owner.

 To run the index verification tool:

  1. Navigate to %Documentum%\fulltext\IndexServer\bin (Windows).
  2. To verify both completeness and accuracy, open a command prompt and execute

           cobra ftintegrity.py -i ftintegrity.params.txt -m b

  3. To verify completeness only, open a command prompt and execute

           cobra ftintegrity.py -i ftintegrity.params.txt -m c

  4. To verify accuracy only and query all indexed objects, open a command prompt and execute

           cobra ftintegrity.py -i ftintegrity.params.txt -m a

FT Integrity generates three reports:

res-comp-common.txt – object IDs of all documents found in both the index and the repository.

res-comp-dctmonly.txt – object IDs of documents that are in the repository but not indexed.

res-comp-fastonly.txt – object IDs of documents in the index but not in the repository.

It also generates the ftintegrityoutput.txt file, which is simply the console output captured in text format.

 To resubmit objects that failed indexing:

a) Navigate to %DOCUMENTUM%\fulltext\IndexServer\bin.

b) Copy the res-comp-dctmonly.txt file to

drive:\Program Files\Documentum\IndexAgents\IndexAgentN\webapps\IndexAgentN\WEB-INF\classes.

c) Rename the res-comp-dctmonly.txt file to ids.txt.

The index agent periodically checks for the existence of ids.txt. If the file is found, the objects are resubmitted for indexing.

13.   Moving From Index Server 5.3 SPx to D6.5 SPx

 a) Index Server D6.5 can use indexes created by a 5.3 SPx Index Server. This reduces the overhead of re-indexing the entire repository during an upgrade.

 While uninstalling Index Server 5.3 SPx, leave the checkbox for deleting the indexes unchecked. Then, during the D6.5 Index Server installation, point it to the existing indices folder as the data folder.

b) While upgrading from 5.3 SPx to the D6.5 Index Server, the data folder (if the indices from 5.3 are preserved) should be on the same drive as the Index Server home directory.

c) The path for the attribute mapping XML is $Documentum\fulltext\fast, as against the $Documentum\fulltext\fast40 folder in 5.3 SPx.

 This path must be correct in the dm_ftengine_config object for index-based search to function properly.

d) The Index Agent in D6.5 uses 20 consecutive ports, as against 1 port in 5.3 SPx; i.e., if Index Agent 1 is running at port 9081, then Index Agent 2 cannot take any port from 9081 to 9100.

e) D6.5 doesn’t provide an option to switch Index Agent between different modes using Index Agent admin console.

f) The params file for using FT Integrity tool is placed under $Documentum\jboss4.2.0\server\DctmServer_IndexAgentN\deploy\IndexAgentN.war

Accordingly the ids.txt file should be placed under $Documentum\jboss4.2.0\server\DctmServer_IndexAgentN\deploy\IndexAgentN.war\WEB-INF\classes

Hope this post is useful for all those developers who wanted more step-by-step info on Index Servers.

Publish Content to a website using Documentum Web Publisher

September 30, 2011

Today’s global companies produce an enormous amount of content.  Web sites and portals are the first avenue to distribute this business information to all major stakeholders. Web content management has become a primary strategy to help organizations communicate more effectively with their key audiences. Ineffective web content management can significantly undermine company messaging, decrease sales, increase staffing requirements, and raise operational costs and risks.

This post briefly discusses the need for web content management and the solution provided by Documentum. It also describes how to publish content to a website using Documentum Web Publisher.

Documentum Web content management solution

Documentum provides an enterprise content management approach for managing all unstructured data including documents, web pages, XML, and rich media throughout the organization. Documentum web content management system is built on this underlying architecture to support management and publishing of all unstructured content types.  It can drive down costs, simplify the management of multiple sites, and increase productivity for the creation, approval, and publishing of content, ultimately delivering a superior user experience to raise customer satisfaction and revenues.

  Key components:

  • Web Publisher

Web Publisher is a browser-based application that simplifies and automates the creation, review, and publication of web content. It works within Documentum 5, using Documentum Content Server to store and process content, and it uses Documentum Site Caching Services (SCS) to publish content to the web. Web Publisher manages web content through its entire life: creation, review, approval, publishing and archiving. It also includes a full complement of capabilities for global site management, faster web development using templates and presentation files, and administration. Web Publisher can be integrated with authoring tools to develop web sites.

  • Documentum Content Server

Content Server stores content and metadata in a repository called a docbase. It provides a full set of content management services, including library services (check-in and check-out), version control, archiving options, and process management features such as workflows and lifecycles. It also provides secure access to the content stored in the repository.

  • Site Caching Services

Documentum Site Caching Services (SCS) publishes documents directly from a docbase to a web site, extending the capabilities of Content Server. It has two components: source software and target software. The source software has to be installed on the Content Server host and the target software on the web server. SCS chooses which content to publish, and to what location, according to the parameters in a publishing configuration. Users can create publishing configurations in Documentum Administrator.

  • Site Deployment Services

Site Deployment Services retrieves the web site from the Site Caching Services repository and deploys it to multiple servers or Internet Service Providers.

Related Solutions:

The Documentum web content management solution is strengthened by complementary products that address more sophisticated web challenges such as rich media authoring environments, better searching and navigation, better collaboration, portals, and records management for compliance.

  • Content Intelligent Services

Content Intelligent Services provides better searching, navigation and personalization for large amounts of web content.

  • Content Rendition Services

Content Rendition Services automates the conversion of standard desktop document formats into Web-ready formats such as PDF and HTML and stores the renditions in a Documentum repository alongside the original.

  • Content Media Services

Content Media Services performs all analysis and transformations activities for any media file format.

  • Web Publisher Portlets

This is the portal solution offered by Documentum. Web Publisher portlets allow users to participate in fundamental, content-based business processes without leaving their familiar portal environments or learning a new application. They include three out-of-the-box portlets that are also fully customizable: My Web Publisher, Submit Content, and Published Content.

  • My Web Publisher – provides a personalized view, allowing users to easily see vital information such as the number of unread tasks and notifications
  • Submit Content – displays all Web Publisher templates that an end user is allowed to access and summarizes files that have been created, published, and checked out
  • Published Content – provides users with a list of documents in an active published state and grouped by category such as announcements, corporate news, or human resources (HR)
  • Record Management Solution – Record management solution helps companies to comply with regulations governing electronic information.
  • eRoom –  eRoom is a web based collaboration tool that allows people to work together on content, projects, and processes both within the enterprise and beyond. This may include external entities such as partners, suppliers, customers, and clients.
  • Inter enterprise workflow services – With Inter enterprise workflow services, documentum workflows can be extended across a company’s firewall to include business partners. It also enables integration of documentum workflows with workflow engines, including EAI and BPM systems as well as with workflows from other enterprise applications. 

Publishing Content to a Website

Traditionally, web teams or IT departments manage web sites manually. Web teams are overwhelmed with demands to constantly publish new content and ensure higher quality standards while managing hundreds of thousands of web pages across external websites and portals. To overcome these challenges, organizations need to empower business content owners to author and publish content. Providing content templates can help in maintaining brand integrity across all sites. Companies should also ensure that content is reviewed and approved before it is published. Organizations can achieve consistency and quality without burdening web teams with costly and time-intensive manual updates by adopting a web content management solution. We will see how this can be achieved using Documentum as a web content management tool.

Web teams can create web sites using Web Publisher. The web sites created are stored in web cabinets. The web administrator can create groups, which define specific job functions like content authoring and reviewing, and add users to these groups. Web developers can design content templates, assign a lifecycle and workflow to each template, and make it available for use via the web using Web Publisher. The lifecycle identifies the state of a document. The Web Publisher default lifecycle has the following states: Start, WIP, Staging, Approved and Active.

  • Start

When content is newly created or newly versioned, Web Publisher places it in the Start state, for initialization purposes, and then immediately promotes it to the WIP state.

  • WIP (Work In Progress)

Content in draft or review.

  • Staging

Content that is complete and ready for testing on a staging Web site. By default, Web Publisher does not allow users to modify a file’s content, location or properties if the file has advanced to the Staging state or beyond.

  • Approved

Content that is approved for the active Web site but has not yet reached its publication date (i.e., effective date).

  • Active

Content that is on the active Web site.
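
A document's lifecycle position is recorded on the object itself: r_policy_id points to the attached lifecycle and r_current_state holds the index of the current state. Assuming a DFC session obtained as in the earlier sketch, and a placeholder cabinet path, checking the state of everything in a web cabinet might look like this:

    import com.documentum.fc.client.*;
    import com.documentum.fc.common.DfException;

    public class CheckLifecycleState {
        // Assumes 'session' was obtained via IDfSessionManager as shown earlier.
        static void printStates(IDfSession session) throws DfException {
            IDfQuery query = new DfQuery();
            // '/MyWebCabinet' is a placeholder cabinet path.
            query.setDQL("select object_name, r_current_state "
                       + "from dm_document where folder('/MyWebCabinet', descend)");
            IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
            while (rows.next()) {
                // r_current_state is an integer index into the lifecycle's state list.
                System.out.println(rows.getString("object_name")
                        + " -> state index " + rows.getString("r_current_state"));
            }
            rows.close();
        }
    }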

A workflow defines the activities to be performed on content and the users who will perform them. A workflow can also include automatic tasks, which are performed by the system; for example, an automatic task might promote a file to a new lifecycle state. Using Web Publisher, web teams can create workflow templates that can later be reused for any content type.

Content authors can create content based on content templates to which a lifecycle or workflow is assigned. The content templates help companies maintain brand integrity, and review and approval of content can be automated using workflow. Thus enterprises can control what content is created, by whom, and in what manner.

Once the content is approved, it has to be published. Content owners can publish content using Web Publisher. For this, a publishing configuration has to be created for the web site using Documentum Administrator. Separate publishing configurations can be used for different lifecycle stages (WIP, Staging, and Active) of each web cabinet. Each publishing configuration publishes to a separate target location: the WIP and Staging sites are for internal testing, while the Active site is the live web site. Users access the WIP and Staging sites through the Web Publisher preview command or a URL.

If a web site is created in multiple file formats or languages, use publishing configurations to determine which format or language is published to a given web server. For example, suppose product.htm has three renditions: product.htm, product.xml, and product.wml. Two publishing configurations can be created for the site: one that publishes HTML, GIF, and CSS files, and another that publishes WML files (product.xml is used for development and is not published).
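
Site publishing configurations are themselves repository objects, so they can also be inspected with DQL. In the installations I have worked with they are stored as dm_webc_config objects; treat that type name as an assumption and verify it against your SCS version. A minimal sketch, reusing a session from the first example:

    import com.documentum.fc.client.*;
    import com.documentum.fc.common.DfException;

    public class ListPublishingConfigs {
        // dm_webc_config as the SCS configuration type is an assumption --
        // confirm it for your Documentum/SCS version.
        static void list(IDfSession session) throws DfException {
            IDfQuery query = new DfQuery();
            query.setDQL("select object_name from dm_webc_config");
            IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
            while (rows.next()) {
                System.out.println("publishing config: " + rows.getString("object_name"));
            }
            rows.close();
        }
    }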

When a publishing configuration is created, Site Caching Services (SCS) automatically creates a publishing job (a DQL sketch after the list below shows how to find these jobs).

An SCS publish operation can be initiated when any of the following occurs:

  • When the publishing job's scheduled interval occurs.
  • When a user manually publishes content through the Publish command.
  • When content is manually or automatically promoted to the Staging or Active state. Promotion initiates the publishing operation only if the web site is configured to use synchronous publishing. Manual promotion occurs when a user either promotes content to the next lifecycle state or power-promotes content to the Approved lifecycle state. Automatic promotion occurs when Web Publisher promotes content through an automatic workflow task or through the arrival of the content's effective date. If a web page reaches the Approved state after the effective date has passed, the page is published the next time the site is updated.
  • When a user previews content in the WIP or Staging states to see how it will appear on the web. Web Publisher initiates the publishing operation if the content has been modified since the last publishing job ran.
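
Each configuration's publishing job is a dm_job object, so the job and its schedule can be checked from DQL as well. In my experience SCS job names start with dm_webcache; treat that naming pattern as an assumption for your version. A minimal sketch:

    import com.documentum.fc.client.*;
    import com.documentum.fc.common.DfException;

    public class ListPublishingJobs {
        // The dm_webcache name prefix for SCS publishing jobs is an
        // assumption -- check the job list in your own docbase.
        static void list(IDfSession session) throws DfException {
            IDfQuery query = new DfQuery();
            query.setDQL("select object_name, is_inactive, a_next_invocation "
                       + "from dm_job where object_name like 'dm_webcache%'");
            IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
            while (rows.next()) {
                System.out.println(rows.getString("object_name")
                        + " inactive=" + rows.getString("is_inactive")
                        + " next run=" + rows.getString("a_next_invocation"));
            }
            rows.close();
        }
    }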

Web Publisher removes web pages from web sites when the pages reach their expiration dates. Steps to create and publish content to a website are given below.

Steps to Create and Publish Content to a Website

  1. Log in to Web Publisher as administrator.
  2. Go to Administration -> User Management -> Users.
  3. Add users to the docbase. Refer to Web Publisher help for more details.
  4. Add users to the content author, content manager, and administrator groups.
  5. Set the client capability of the content author user to Contributor.
  6. Set the client capability of the content manager user to Coordinator.
  7. Set the client capability of the administrator to System Administrator.
  8. Create a workflow template using Workflow Manager. You can use either the desktop version of Workflow Manager or the web version accessible from Web Publisher.

Web Publisher provides default workflow templates. For example, Submit to Web site is a simple workflow used to publish content to a web site. When starting this workflow, the content manager specifies an author to work on the content, and the task appears in the content author's inbox. The content author modifies the content and forwards it to the manager for review; Web Publisher promotes the content from WIP to Staging prior to review. If the reviewer rejects the task, Web Publisher demotes the content to WIP and routes it back to its originator. If the reviewer approves the task, Web Publisher routes the content to an approver and the content is promoted to the Approved state. Once approved, the content is automatically promoted to the Active state and is published to the website.
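
Installed workflow templates are dm_process objects in the repository, so the templates available as a starting point can be listed with a quick DQL query (again assuming an open DFC session as in the first sketch):

    import com.documentum.fc.client.*;
    import com.documentum.fc.common.DfException;

    public class ListWorkflowTemplates {
        // r_definition_state = 2 filters for installed templates.
        static void list(IDfSession session) throws DfException {
            IDfQuery query = new DfQuery();
            query.setDQL("select object_name, r_object_id from dm_process "
                       + "where r_definition_state = 2");
            IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
            while (rows.next()) {
                System.out.println(rows.getString("object_name")
                        + " (" + rows.getString("r_object_id") + ")");
            }
            rows.close();
        }
    }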

To create a workflow template using the desktop version of Workflow Manager:

a) Open Workflow Manager.

b) Log in to Workflow Manager as a Web Publisher administrator. A Web Publisher administrator should have superuser permissions.

c) Choose File -> Open. Browse to the System -> Applications -> Web Publisher folder and select a Web Publisher default workflow, for example Submit to Web site. This opens a default Web Publisher workflow on which to base the custom workflow.

d) Choose File -> Save As and save the workflow with a name that represents the workflow. All workflows must be saved under System -> Applications -> Web Publisher -> <user_defined_workflow_folder>. Create a new folder or save workflows to the Web Publisher root folder.

e) Validate the template.

f) Install the new workflow template.

g) Make it available through Web Publisher.

For more information on creating workflows, refer to the Workflow Manager User Guide.

To access Workflow Manager from Web Publisher instead, log in to Web Publisher as a Web Publisher administrator, select Web Publisher Admin -> Workflow templates, and repeat steps c through g.

  9. Create a new category under Templates in Web Publisher.
  10. Import a template into the new category. The template provides the layout for content.
  11. Assign the default lifecycle and the newly created workflow to the template.
  12. Make the template available for use.
  13. Create a web cabinet in Web Publisher.
  14. Create a folder in the web cabinet.
  15. Create content using the newly created template in Web Publisher.
  16. Create a site publishing configuration in Documentum Administrator:

      i) Start Documentum Administrator and connect to the docbase as a superuser.
      ii) Click Site Publishing.
      iii) Create a new site publishing configuration.
      iv) Set values in the site publishing configuration:
         a) Click Active.
         b) In the Version field, type Active.
         c) Click Publishing Folder and browse the docbase to the website folder (web cabinet).
         d) Type the target host name. This is the host where the SCS target software is installed.
         e) Type the target port for making connections to the target host. The port entered must match the port specified at the time of SCS installation (DefaultPort, 2789).
         f) Type the target root directory to which the content will be published. To publish web pages to Apache Tomcat, for example, the target directory would be: C:\Program Files\Apache Group\Tomcat 4.1\webapps\ROOT
         g) Choose the connection type.
         h) Click the Advanced tab.
         i) Select a new export directory or leave the default unchanged.
         j) Type the transfer user name, password, and domain. Enter the transfer authentication domain name that was provided during SCS target installation, along with a valid username and password.
         k) Click OK.

  17. Log in to Web Publisher as content manager.
  18. Start the new workflow using Web Publisher (a DFC sketch for starting a workflow programmatically appears after these steps):
      a. Navigate to the content created.
      b. Select the checkbox.
      c. Go to Tools.
      d. Select Workflow -> Start.
      e. Assign a content author and web admin.
  19. Workflow tasks will appear in the users' inboxes.
  20. Users can accept and forward a task to the next user or reject it.
  21. Content is automatically published to the web site once it is approved.
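
For completeness, step 18 can also be performed through the DFC workflow API instead of the Web Publisher UI. The sketch below looks up the installed template by name and starts a workflow instance from it. It deliberately omits attaching the content package and assigning performers (IDfWorkflowBuilder supports both), and the template name is a placeholder for whatever was saved in step 8, so read it as a hedged outline rather than a drop-in replacement for the UI steps:

    import com.documentum.fc.client.*;
    import com.documentum.fc.common.DfException;
    import com.documentum.fc.common.IDfId;

    public class StartWorkflow {
        // 'My Submit to Web site' is a placeholder template name -- use the
        // name the workflow was saved under in step 8.
        static void start(IDfSession session) throws DfException {
            IDfQuery query = new DfQuery();
            query.setDQL("select r_object_id from dm_process "
                       + "where object_name = 'My Submit to Web site' "
                       + "and r_definition_state = 2");
            IDfCollection rows = query.execute(session, IDfQuery.DF_READ_QUERY);
            if (!rows.next()) {
                rows.close();
                System.out.println("workflow template not found");
                return;
            }
            IDfId templateId = rows.getId("r_object_id");
            rows.close();

            // Build and run a workflow instance from the installed template.
            // A real run also needs the content bound to the start activity
            // via builder.addPackage(...), omitted here for brevity.
            IDfWorkflowBuilder builder = session.newWorkflowBuilder(templateId);
            builder.initWorkflow();
            IDfId workflowId = builder.runWorkflow();
            System.out.println("started workflow " + workflowId);
        }
    }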

To conclude, Documentum provides an enterprise approach to transforming an organization's online presence and driving ROI. It offers an easy-to-use, browser-based interface that empowers non-technical users to create, manage, and publish content for multilingual web sites and portals. This solution is suitable for medium to large companies.
