When creating an interface group you have to enter values for the following fields:
- Interface Group Name
- Integration
- Type
- Priority
These properties are mandatory and are marked in red.
Optional fields are:
- Package Size
- Number of Records Per Batch
Interface Group Name #
This is the name of the interface group. The name will be used as part of the queue name. Here is an example of a queue name: a0IF000000IsGB7MAN-EO_ContactBalint_6. In this example the interface group name is “ContactBalint_6”.
Integration #
This is the integration to which the interface group belongs. An interface can belong to only one interface group, and an interface group belongs to exactly one integration. The integration you select here will be part of the queue name.
Here is an example of a queue name: a0IF000000IsGB7MAN-EO_ContactBalint_6. In this example the integration with Id = a0IF000000IsGB7 was specified in this field.
[su_box title=”Note” box_color=”#2a8af0″ title_color=”#000000″]
All interfaces of an interface group belong to the same integration. An interface group can only contain interfaces from the same integration; it is not possible to put interfaces from different integrations into the same group.
[/su_box]
Type #
This is the type of the interface group. The value can be {EO, EOIO}.
- EO = Exactly Once
- EOIO = Exactly Once in Order
When you create an interface group with type EO, it will be processed in an exactly-once manner, i.e. the order is not guaranteed. If you want to keep the order in which the sender application sends the data, you have to define this group with type EOIO.
Note that both types define asynchronous processing. The type will be part of the queue name.
Here is an example of a queue name: a0IF000000IsGB7MAN-EO_ContactBalint_6. In this example the type is EO.
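Putting the three parts together, the queue name in the examples above appears to follow the pattern <integrationId>-<type>_<interfaceGroupName>. The following sketch only illustrates this; the exact composition is internal to SKYVVA and the helper function is hypothetical:

```python
# Illustrative sketch only: the queue name pattern is inferred from the
# example a0IF000000IsGB7MAN-EO_ContactBalint_6; the exact format is
# internal to SKYVVA.

def build_queue_name(integration_id, group_type, group_name):
    """Compose <integrationId>-<type>_<interfaceGroupName>."""
    return f"{integration_id}-{group_type}_{group_name}"

# "a0IF000000IsGB7MAN" is presumably the 18-character form of the
# 15-character Id a0IF000000IsGB7.
print(build_queue_name("a0IF000000IsGB7MAN", "EO", "ContactBalint_6"))
# -> a0IF000000IsGB7MAN-EO_ContactBalint_6
```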
Priority #
You have to specify a priority, which can be one of the values {High, Medium, Low}.
With this property you can define that interface group A is to be processed before interface group B by setting the priority of interface group A to High and the priority of interface group B to Medium or Low.
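Conceptually the priority acts like a sort key for the queues. The following hypothetical sketch only illustrates the idea; the actual scheduling logic is internal to SKYVVA:

```python
# Hypothetical sketch: how a priority could translate into processing order.
# The actual scheduling logic is internal to SKYVVA.

PRIORITY_RANK = {"High": 0, "Medium": 1, "Low": 2}

groups = [
    {"name": "InterfaceGroupB", "priority": "Low"},
    {"name": "InterfaceGroupA", "priority": "High"},
]

# Higher-priority groups are picked up before lower-priority ones.
for group in sorted(groups, key=lambda g: PRIORITY_RANK[g["priority"]]):
    print(group["name"])
# -> InterfaceGroupA
# -> InterfaceGroupB
```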
Package Size #
With this property you can bundle several attachments together into one package. If you don't put any value into this field, a package will simply contain the number of records found in one attachment. Let's look at some examples to understand this property. In all examples we have 7 attachments with 10 records each, i.e. 70 records in total. In the first example no package size is defined, so the scheduler processes one attachment (10 records) per run and needs 7 runs. In the second example we set the package size to 20.
For the first scheduler run the scheduler will bundle 2 attachments into one package and pass it over to the worker to post the data. Note that the terms scheduler and worker will be explained in detail in chapter “11 Task separation between Scheduler and Worker”. As a short explanation: the scheduler only schedules the data packages but does not do the processing itself. It passes each data package to the worker, which posts the data into the application, e.g. into the object Account.
In the first scheduler run the scheduler creates a package with a size of 20 records, which is equal to bundling two attachments together. The scheduler passes this package (20 records) to the worker and the worker processes it. Compared to the first example above, the worker now gets a bigger data package to process. The remaining total is now 50 records.
In the second scheduler run the scheduler again creates a package of 20 records and passes it to the worker; the remaining total is now 30 records. In the third run again a package of 20 records is created and passed to the worker, leaving 10 records.
In the fourth scheduler run the scheduler again tries to create a package of 20 records, but since only 10 records are left in the working basket, the last package contains just 10 records.
Let's compare the first and the second example. In the first example we need in total 7 scheduler runs, meaning 7 * 4 minutes = 28 minutes of processing time. In the second example we need 4 scheduler runs, meaning 4 * 4 minutes = 16 minutes of processing time.
Now let us look at the third example, where we set the package size to 40.
Here the first scheduler run creates a package with 40 records, leaving 30 records. The second scheduler run takes the remaining 30 records and passes them to the worker.
In total we need only 2 scheduler runs in this example, meaning 2 * 4 minutes = 8 minutes of processing time.
In the last example we set the package size to 80.
Now we need only one scheduler run to process all 7 attachments of 10 records each.
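The number of scheduler runs in these examples follows directly from the package size. Here is a small sketch of that arithmetic, assuming the 7 attachments of 10 records each and the 4-minute scheduler interval from the examples above (the interval may differ in your org):

```python
import math

# Sketch of the arithmetic in the examples above: 7 attachments with
# 10 records each (70 records total), one scheduler run every 4 minutes.

TOTAL_RECORDS = 7 * 10
MINUTES_PER_RUN = 4

def scheduler_runs(total_records, package_size):
    """One run per package; the last package may contain fewer records."""
    return math.ceil(total_records / package_size)

for package_size in (10, 20, 40, 80):  # 10 = no package size (one attachment per run)
    runs = scheduler_runs(TOTAL_RECORDS, package_size)
    print(f"package size {package_size}: {runs} run(s) = {runs * MINUTES_PER_RUN} minutes")
# package size 10: 7 run(s) = 28 minutes
# package size 20: 4 run(s) = 16 minutes
# package size 40: 2 run(s) = 8 minutes
# package size 80: 1 run(s) = 4 minutes
```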
Whenever the size of an attachment allows building packages, e.g. because the total number of records within the attachment is small, you should set this property to a value that is reasonable for your scenario. We cannot give you a recommendation for a fixed package size because it depends on the characteristics of your data, e.g. how many fields a record has, how big each field is, etc. You have to find the optimal package size by trial and error.
If you define a package size that is too big, you will run into the heap size limit exception. If you define one that is too small, you will lose performance. The best package size therefore lies between these two extremes.
Number of Records Per Batch #
Processing data in Salesforce is bound to platform limits which you have to take care of; otherwise you will get an error and your data will not be posted. One of these limits is the heap size when processing big data volumes. If you pass an attachment with 500 account records, it is not possible to process those 500 records in one batch; if you try, you will get the heap size limit exception.
To avoid this, the SKYVVA engine processes the records from an attachment in so-called batches or packages. The default value is 50 records per batch. If you don't define any value, the default of 50 will be used; if you define a new value, that value will be used.
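The effect of this field can be illustrated with a small sketch; the real batching is done inside the SKYVVA engine, and this only shows how records are split:

```python
# Illustrative sketch of the effect of "Number of Records Per Batch".
# The real batching is done inside the SKYVVA engine.

def split_into_batches(records, batch_size=50):
    """Yield consecutive batches of at most batch_size records."""
    for start in range(0, len(records), batch_size):
        yield records[start:start + batch_size]

records = [f"account_{i}" for i in range(500)]  # e.g. 500 account records
batches = list(split_into_batches(records))

print(len(batches))      # -> 10 batches of 50 records each
print(len(batches[-1]))  # -> 50
```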
Note that we do not recommend using another value, because we have tested the default at different customers and have seen that it is good enough for most of them. If you define a new value, you have to take into account the size of your data, e.g. the number of fields in your records and the size of each field. You have to balance the record size against the batch size to find the right value for this field.
If you have performance problems posting the data, you can experiment with values other than the default of 50, for example start with 100 and test whether you get the heap size limit exception or not. Test it thoroughly before putting this setting into production!