Salesforce allows developers to create data models for applications with a few simple clicks, making the process faster and easier. However, Salesforce architects should always design data models with precision so that the application delivers the features the organization needs. Salesforce data model designs can be simple or complex, depending on the objectives the organization wants to achieve.
An effective data model design makes your Salesforce application more scalable, flexible, and accessible. It helps you optimize your Salesforce database and use it to provide personalized services to your customers.
A poor data model design, on the other hand, results in an application that lacks flexibility and scalability. Although not visible immediately, the signs of a poor data model design become evident once you try to scale your application, driven by factors such as:
- Increase in the volume of data to be managed
- Increase in the number of processes operating on Salesforce objects
- Increase in the number of system integrations and interfaces
- Increase in the number of Salesforce users
This makes it important to be vigilant and precise while designing the data model for your Salesforce applications. An architect should always enumerate the assumptions made while designing a data model and keep them documented; that document then acts as a reference for future data model design decisions.
In this blog, we discuss the important considerations to keep in mind while creating data model designs for your Salesforce applications.
Be Well-versed With The Out-of-the-box Objects
Salesforce provides users with a range of out-of-the-box objects in different areas, including sales, service, pricing, quoting, marketing, billing, field service, commerce, community, and more. As an architect, much of what you need to create a robust Salesforce application is likely already present within these objects, which span multiple industries and sectors.
It is always advisable to review these objects carefully before creating a Salesforce data model design. Doing so helps you understand the different personas and business processes your Salesforce data model needs to support. Architects often miss out on important capabilities and features of the platform by taking these out-of-the-box objects lightly.
Suppose an architect is not aware of the standard Case object in Salesforce and ends up creating a custom case object to provide customer support. Despite the effort invested in the custom object, the architect misses out on built-in features such as case assignment rules, Web-to-Case, case teams, escalation rules, and more, and therefore cannot fully leverage the platform to create a powerful Salesforce application.
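To make this concrete, here is a minimal Apex sketch of how little code the standard Case object needs to pick up platform routing; the subject and picklist values are placeholders:

```apex
// A minimal sketch: insert a standard Case and let the org's active
// case assignment rules route it -- no custom routing logic required.
Case supportCase = new Case(
    Subject  = 'Cannot log in to customer portal',  // placeholder values
    Origin   = 'Web',
    Priority = 'High'
);

// Ask the platform to apply the default assignment rule on insert.
Database.DMLOptions dmo = new Database.DMLOptions();
dmo.assignmentRuleHeader.useDefaultRule = true;
supportCase.setOptions(dmo);

insert supportCase;
```

A custom case object would need triggers or flows to replicate even this much behavior, and would still lack Web-to-Case, case teams, and escalation rules.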
Focus On Data Volume
Data volume plays an important role in designing data models for Salesforce applications. While defining your data model, forecast the volume of data every object is likely to accumulate over time, and write down the growth-rate assumptions behind those forecasts, especially for objects expected to handle large data volumes.
Large data volumes tend to make application performance sluggish, resulting in slower queries, sandbox refreshes, searches, and list views. If you design your data model with large data volumes in mind from the start, you can avoid these problems.
Here are some important considerations to keep in mind while working with large data volumes:
- Index the fields you filter on if query performance is slow (see the query sketch after this list)
- Use the Lightning Platform query optimizer to improve the performance of your SOQL queries
- Test your reports, custom code, and list views by loading large data volumes into full-copy sandboxes
- Use Salesforce Big Objects if you need to store large volumes of data for audit or security purposes
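To illustrate the indexing point, here is a hedged sketch: the Invoice__c object and Order_Number__c field are hypothetical, with Order_Number__c assumed to be marked as an External ID (and therefore indexed):

```apex
// Hypothetical sketch: filtering on an indexed field keeps the query
// selective, so the optimizer can avoid a full scan of a large table.
String orderNumber = 'ORD-000042';
List<Invoice__c> invoices = [
    SELECT Id, Name, Amount__c
    FROM Invoice__c
    WHERE Order_Number__c = :orderNumber  // indexed via the External ID attribute
    LIMIT 200
];

// By contrast, a filter like the one below cannot use an index and tends
// to degrade badly as the object grows to millions of rows:
// WHERE Description__c LIKE '%overdue%'
```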
If you are considering storing data outside your Salesforce org, look at frameworks like Salesforce Connect, which let you view, search, and modify data that lives in external sources. This approach is effective when you need only small subsets of the data at any one time, and it gives you real-time access to the latest data even though it is not stored within your Salesforce org.
Frameworks like these integrate seamlessly with the Lightning Platform: external objects support lookup relationships, global search, record feeds, and the Salesforce mobile application.
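For instance, once an external data source is configured, an external object (recognizable by its __x suffix) can be queried much like a native object. The object and field names below are hypothetical:

```apex
// Hypothetical sketch: Order__x is an external object exposed through
// Salesforce Connect; its rows live in the external system, not in your org.
List<Order__x> openOrders = [
    SELECT ExternalId, OrderNumber__c, Status__c
    FROM Order__x
    WHERE Status__c = 'Open'
    LIMIT 50
];
System.debug(openOrders.size() + ' open orders fetched from the external system');
```

Exactly which SOQL features are supported depends on the Salesforce Connect adapter in use, so treat this as a sketch rather than a guarantee.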
Understand Normalization And Denormalization
To design data models smoothly and effectively, an architect should be well-versed in the normalization and denormalization of datasets.
Normalization is a technique for organizing your data into separate tables to reduce redundancy. Redundant data consumes additional storage capacity and invites inconsistencies. Normalization helps you split your data into smaller, related tables to avoid these issues.
Denormalization, on the other hand, is used by architects to design data models with as few objects as possible. This allows data to be read with fewer joins and simpler queries.
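The trade-off shows up directly in your queries. In this hedged sketch with hypothetical objects, a normalized model reads the customer name through two relationship hops, while a denormalized model copies the name onto the detail record itself:

```apex
// Normalized model (hypothetical): Customer__c <- Invoice__c <- Invoice_Line__c.
// Reading the customer name traverses two joins at query time.
List<Invoice_Line__c> lines = [
    SELECT Id, Amount__c, Invoice__r.Customer__r.Name
    FROM Invoice_Line__c
];

// Denormalized alternative: Customer_Name__c is duplicated onto each line,
// so reads need no joins -- at the cost of redundant, possibly stale data.
List<Invoice_Line__c> flatLines = [
    SELECT Id, Amount__c, Customer_Name__c
    FROM Invoice_Line__c
];
```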
It is important for architects to choose between normalization and denormalization when designing a data model. Generally, this decision depends on user experience requirements, data security, data volume, analytics needs, and other factors. Note also that your Salesforce org stores metadata and field attributes in multiple tables: what you see as a Salesforce object is a virtual table. When you execute SOQL statements, Salesforce generates optimized SQL involving complicated joins between index and metadata tables. Going overboard with normalization by creating unnecessary objects therefore only increases the load on your system.
Every data modeling and management technique has its own set of advantages and disadvantages. It is important for an architect to analyze the trade-offs carefully before making the final decision.
Select Ideal Data Types For Your Object Fields
The Salesforce Platform supports a range of field data types. While designing a data model for a Salesforce application, an architect should understand the features and limitations of every field type with regard to reporting, data encryption, and field conversion. If you plan to encrypt your data, review which standard and custom fields need to be encrypted.
Moreover, it is advisable to consider reporting requirements while choosing data types. For example, an architect may want to avoid the multi-select picklist data type because of its limited grouping and filtering capabilities in reports.
Also, an architect should ascertain which fields to designate as external IDs using the External ID attribute. This allows you to upsert records using identifiers from external systems. Salesforce allows no more than 25 external ID fields on an object, so use this limit wisely.
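As a sketch of how the External ID attribute pays off during integration, the upsert below matches incoming rows on a hypothetical ERP_Id__c field rather than on Salesforce record IDs:

```apex
// Hypothetical sketch: ERP_Id__c is a custom Account field marked External ID,
// holding each account's identifier in an external ERP system.
List<Account> accounts = new List<Account>{
    new Account(Name = 'Acme Corp',   ERP_Id__c = 'ERP-1001'),
    new Account(Name = 'Globex Inc.', ERP_Id__c = 'ERP-1002')
};

// Existing matches are updated and new rows are inserted in a single call,
// with no need to look up Salesforce IDs first.
upsert accounts ERP_Id__c;
```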
Make Sure You Select The Right Relationships Between Salesforce Objects
Choosing the right relationships between Salesforce objects plays an important role in designing data models for Salesforce applications. Architects should therefore analyze the pros and cons of every major object relationship and select the one that best suits the requirements of the project.
Most architects end up choosing the master-detail relationship as it provides them with the following benefits:
- It provides out-of-the-box rollup summary fields that count and aggregate the child (detail) records
- It creates a stronger link between Salesforce objects, where child records are automatically deleted when the master record is deleted
However, it is important to be well-versed with all aspects of this relationship before choosing it for your data model. When a record on the master side of the relationship has a large number of detail records, users are likely to encounter UNABLE_TO_LOCK_ROW errors. Every time a detail record is edited, the corresponding master record is locked, so the more detail records there are, the more often concurrent edits collide on that single master lock.
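One common mitigation, sketched below with hypothetical Invoice__c (master) and Invoice_Line__c (detail) objects, is to take the master record's lock explicitly with SOQL FOR UPDATE before bulk-editing its details, so contention surfaces up front rather than as an UNABLE_TO_LOCK_ROW error mid-transaction:

```apex
public class InvoiceRepricer {
    // Hypothetical sketch, not a definitive pattern for every org.
    public static void repriceLines(Id masterId) {
        // FOR UPDATE locks the master row for the rest of this transaction,
        // so a conflicting transaction fails fast here instead of midway.
        Invoice__c master = [
            SELECT Id FROM Invoice__c WHERE Id = :masterId FOR UPDATE
        ];
        List<Invoice_Line__c> details = [
            SELECT Id, Amount__c FROM Invoice_Line__c WHERE Invoice__c = :master.Id
        ];
        for (Invoice_Line__c line : details) {
            line.Amount__c = line.Amount__c * 1.05;  // example bulk adjustment
        }
        // The master lock is already held, so these edits cannot race
        // another transaction for the same master record.
        update details;
    }
}
```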
It is also important to note that in the case of a master-detail relationship, the security of the child record is controlled by that of the parent record. This prevents you from having a separate security mechanism for child records.
On the other hand, a lookup relationship between Salesforce objects gives you the flexibility to apply the platform's sharing capabilities to child records as your requirements dictate. Lookup relationships also let you choose between a required lookup and an optional lookup.
Before you commit to any object relationship in Salesforce, make sure it fits the application you intend to build and the objectives your organization wants to achieve.
The Final Word
These are some of the most important considerations to keep in mind while designing a data model for Salesforce applications. A well-designed data model lets you optimize your database and streamline your processes through the Salesforce application built on top of it.