Implement these changes, and keep an eye out for new features and ways to improve the efficiency of your dashboards.
It benefits both you and your users.
If you enjoyed this, follow me:
@brosjay.bsky.social
for more data-related content.
Start with a clear purpose, consolidate tables, optimise relationships and DAX measures, and prioritise the user experience.
Regularly review and refine your model to keep it in top shape.
Make sure your report is intuitive for users.
Organise visuals logically, provide clear instructions, and use bookmarks or tooltips for guided navigation.
A user-friendly report reduces confusion.
As your report evolves, revisit the data model.
Remove unused tables, columns, or measures.
Refactor DAX code for clarity and efficiency.
Keeping your model tidy is an ongoing process.
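As a small refactoring example (measure and column names here are hypothetical), a repeated expression can be pulled into variables:

Profit Margin =
VAR TotalRevenue = SUM ( Sales[Revenue] )
VAR TotalCost = SUM ( Sales[Cost] )
RETURN
    DIVIDE ( TotalRevenue - TotalCost, TotalRevenue )

Each variable is evaluated once and named for what it holds, which makes the measure easier to read and often faster.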
Simplify DAX measures by breaking complex calculations into smaller, reusable parts.
Organise them into groups for easier management.
Keep measures concise and named appropriately for easy understanding.
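A minimal sketch of the idea (table, column, and measure names are assumptions, and Sales LY assumes a marked date table):

Total Sales = SUM ( Sales[Amount] )

Sales LY = CALCULATE ( [Total Sales], DATEADD ( 'Date'[Date], -1, YEAR ) )

Sales YoY % = DIVIDE ( [Total Sales] - [Sales LY], [Sales LY] )

Each piece is short, reusable, and easy to test on its own.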
↔️ Ensure relationships are simple and logical.
Avoid creating circular relationships or overly complex chains.
Stick to one-to-many and many-to-one relationships wherever possible.
While calculated columns can be useful, excessive use can slow down your model.
Reserve them for essential calculations; move the rest to measures or even source queries for better performance.
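As a rough illustration (hypothetical names), the stored column on top can often become the query-time measure below it:

-- Calculated column: stored for every row, grows the model
Line Total = Sales[Quantity] * Sales[Unit Price]

-- Measure: computed on demand, nothing stored
Total Sales = SUMX ( Sales, Sales[Quantity] * Sales[Unit Price] )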
Avoid creating too many tables.
Consolidate similar data into fewer tables to simplify the model.
Fewer tables mean easier navigation and less overhead.
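One hedged sketch: if yearly extracts land as separate tables, appending them (in Power Query, or as a calculated table) leaves you with a single table to navigate. Table names here are hypothetical:

All Sales = UNION ( Sales2023, Sales2024 )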
Define the key objectives of your report.
Knowing what you want to achieve helps you focus on the necessary data and relationships, avoiding unnecessary complexity.
If you got something from this thread, consider following me: @brosjay.bsky.social
I post data-related content daily.
TL;DR:
- Handle duplicates and missing data.
- Fix naming conventions in values and columns.
- Validate and test - does it make sense/look right?
This is the most crucial part.
Do some test aggregations and visualisations, and ask yourself:
Does the data make sense?
Does it prove/disprove your theory?
Is there anything that doesn't "feel" right when observing?
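One way to start (a sketch; table and column names are assumptions) is a quick query in DAX query view:

EVALUATE
SUMMARIZECOLUMNS (
    Sales[Genre],
    "Row Count", COUNTROWS ( Sales ),
    "Total Amount", SUM ( Sales[Amount] )
)

If a total looks wildly off against what you know of the business, dig in before building visuals.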
Why?
Because missing data can complicate any calculation or grouping you perform.
It's important to understand how you want to handle missing data.
You could remove those rows or fill the gaps with a value.
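For example, filling blanks with zero in a measure might look like this (a sketch, hypothetical names); removing or filling rows is often better done upstream in Power Query:

Amount (Zero Filled) = COALESCE ( SUM ( Sales[Amount] ), 0 )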
Sometimes outliers give us a crucial bit of info to base our work on.
Sometimes they are truly an anomaly (something odd or different to what's expected).
But it's important to work out whether it's valid or not, and then deal with it appropriately.
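One rough way to surface candidates for review (a sketch; the names and the three-sigma threshold are assumptions):

Outlier Count =
VAR AvgAmount = AVERAGE ( Sales[Amount] )
VAR SDAmount = STDEV.P ( Sales[Amount] )
RETURN
    COUNTROWS (
        FILTER ( Sales, ABS ( Sales[Amount] - AvgAmount ) > 3 * SDAmount )
    )

Flag them, then decide case by case whether each one is signal or noise.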
Consistency is key here.
You might have 0 and "Zero" - it's worth fixing and grouping these values correctly.
This applies to column names as well as values - make it easy for anyone to understand.
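A minimal sketch of that 0 / "Zero" fix as a calculated column (hypothetical names; Power Query's Replace Values is usually the cleaner home for this):

Quantity Clean =
IF ( Sales[QuantityText] = "Zero", 0, VALUE ( Sales[QuantityText] ) )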
Duplicate values are most common when you combine datasets.
Irrelevant data matters too: if you sell books and want to analyse how the fiction genre is performing, the non-fiction rows are irrelevant (for this project), so they can be removed.
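For the duplicates themselves, a quick check (hypothetical names) is to compare the row count with the distinct count of what should be a unique key:

Duplicate Rows = COUNTROWS ( Sales ) - DISTINCTCOUNT ( Sales[OrderID] )

Anything above zero is worth investigating before you aggregate.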
But it is important to understand the different components so you can spot scenarios to apply the most relevant techniques.
Let's get stuck in...
Especially when working with more than one dataset (joining or merging), there are many opportunities for data to be duplicated or mislabelled.
How can you have "dirty data"?