MIM 2016: Paging in ECMA – C#

My approach to this blog post will be different, focused on understanding the method of paging in the sync system rather than just presenting code. Let's talk about the concept of paging in MIM ECMA. One of the reasons we page in MIM ECMA is that there is a limit of 9,999 objects per import operation. Normally, here is what we do:

  1. Declare several variables that match the data type of our schema attributes.
  2. Read from the remote directory into a collection of objects for that directory.
  3. Loop through the collection, process the attributes, and for each object save to the CS (connector space). Consider this a hologram save; it has not yet been committed to the database.
  4. Return the result of our import operation to the sync engine which will cause the changes to be committed.

The last step is where the limit is hit. If you have imported, say, 20,000 objects into the CS and gone through the steps above, you will return a result of 20,000 to the sync engine, and it will stop the import process, cancelling all your operations with an error. It will not commit more than 9,999 at a time.
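To make the failure mode concrete, here is a minimal sketch of the unpaged approach described above. `MyremotePersonCollection` and the attribute mapping are placeholders; `CSEntryChange` and `GetImportEntriesResults` are ECMA2 framework types. Because everything is returned in one result, an import of 20,000 objects blows past the 9,999-per-page limit.

```csharp
// Sketch of the unpaged import (hypothetical remote collection and mapping).
// All objects are returned in a single GetImportEntriesResults, so an
// import of 20,000 objects exceeds the per-page limit and fails.
public GetImportEntriesResults GetImportEntries(GetImportEntriesRunStep importRunStep)
{
    List<CSEntryChange> csentries = new List<CSEntryChange>();

    foreach (var remotePerson in MyremotePersonCollection)   // e.g. 20,000 objects
    {
        CSEntryChange csentry = CSEntryChange.Create();
        csentry.ObjectModificationType = ObjectModificationType.Add;
        csentry.ObjectType = "person";
        csentry.DN = remotePerson.Id;          // placeholder attribute mapping
        csentries.Add(csentry);
    }

    GetImportEntriesResults results = new GetImportEntriesResults();
    results.CSEntries = csentries;             // all 20,000 in one shot
    results.MoreToImport = false;              // no paging: one giant page
    return results;
}
```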

We want to break up our import operations into pages or batches. Here is what we will do:

  1. We are going to use the principle of storing different data types in a collection (you can read more in this very good article). Declare a collection of your schema object: declare a public class whose properties match the data types of our schema attributes.

Example

public class MyPerson
{
    public string MyFirstname { get; set; }

    public string MyLastname { get; set; }

    public int MyEmployeeNum { get; set; }
}

2. At class level (so it is accessible from any method), declare a list of the class object and a counter

Example

List<MyPerson> GetPersons;

public int mycounter = 0;

3. In your connection method OpenImportConnection, read from the remote directory into a collection of objects (MyremotePersonCollection) for that directory.

4. Declare an instance of the class array

Example

GetPersons = new List<MyPerson>();

5. Loop through the remote directory object collection, process the attributes, and save each object to the schema collection (GetPersons).
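Putting steps 3–5 together, the connection method might look like the sketch below. `MyremotePersonCollection`, `GetRemotePersons`, and the remote property names are placeholders for whatever your connected directory exposes; only the ECMA2 signature (OpenImportConnection from IMAExtensible2CallImport) comes from the framework.

```csharp
// Sketch of steps 3-5: read the remote directory once, then copy it into
// the class-level schema collection. GetRemotePersons and the remote
// property names (FirstName, etc.) are hypothetical placeholders.
public OpenImportConnectionResults OpenImportConnection(
    KeyedCollection<string, ConfigParameter> configParameters,
    Schema types,
    OpenImportConnectionRunStep importRunStep)
{
    // Step 3: read everything from the remote directory up front
    MyremotePersonCollection = GetRemotePersons();

    // Step 4: create an instance of the class collection
    GetPersons = new List<MyPerson>();
    mycounter = 0;

    // Step 5: process the attributes and save each object to GetPersons
    foreach (var remotePerson in MyremotePersonCollection)
    {
        GetPersons.Add(new MyPerson
        {
            MyFirstname = remotePerson.FirstName,     // hypothetical mapping
            MyLastname = remotePerson.LastName,
            MyEmployeeNum = remotePerson.EmployeeNumber
        });
    }

    return new OpenImportConnectionResults();
}
```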

6. In your import method GetImportEntries, reference the schema collection (because it was declared at class level you can access it from any method) and the import page size limit set in the sync GUI.

myPageSize = importRunStep.PageSize;


7. Loop through the schema collection and for each one save to the CS, using mycounter as the index into the collection. Update the counter after each save.

8. When mycounter reaches the import page size limit, or reaches MyremotePersonCollection.Count, stop the loop and return the import results, with the option GetImportEntriesResults.MoreToImport set to true if mycounter has not yet reached MyremotePersonCollection.Count (meaning there is more to come).

9. The import method is called again by the sync engine. When mycounter matches MyremotePersonCollection.Count, set the option GetImportEntriesResults.MoreToImport to false to indicate to the sync engine that the import is finished.
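Steps 6–9 can be sketched as one paged import method. `CSEntryChange`, `GetImportEntriesResults`, and `importRunStep.PageSize` are ECMA2 framework members; the object type and attribute names ("person", "firstName", "lastName") are assumptions for illustration, so substitute your own schema.

```csharp
// Sketch of steps 6-9: save at most one page of entries per call, using
// the class-level mycounter to remember where the previous page stopped.
public GetImportEntriesResults GetImportEntries(GetImportEntriesRunStep importRunStep)
{
    int myPageSize = importRunStep.PageSize;   // limit set in the sync GUI
    List<CSEntryChange> csentries = new List<CSEntryChange>();

    // Step 7: save up to one page to the CS, indexed by mycounter
    int savedThisPage = 0;
    while (savedThisPage < myPageSize && mycounter < GetPersons.Count)
    {
        MyPerson person = GetPersons[mycounter];

        CSEntryChange csentry = CSEntryChange.Create();
        csentry.ObjectModificationType = ObjectModificationType.Add;
        csentry.ObjectType = "person";                       // assumed type
        csentry.DN = person.MyEmployeeNum.ToString();        // assumed anchor
        csentry.AttributeChanges.Add(
            AttributeChange.CreateAttributeAdd("firstName", person.MyFirstname));
        csentry.AttributeChanges.Add(
            AttributeChange.CreateAttributeAdd("lastName", person.MyLastname));
        csentries.Add(csentry);

        mycounter++;          // update the counter after each save
        savedThisPage++;
    }

    // Steps 8-9: tell the sync engine whether to call us again
    GetImportEntriesResults results = new GetImportEntriesResults();
    results.CSEntries = csentries;
    results.MoreToImport = (mycounter < GetPersons.Count);
    return results;
}
```

Because mycounter lives at class level, it survives between calls: each call resumes where the last page ended, and the run finishes cleanly once MoreToImport comes back false.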
