CXL Institute Growth Marketing Minidegree review Part 7
Understanding Excel for marketers and Marketing Attribution
After an amazing learning session on Google Analytics and Google Tag Manager, we dove into understanding Excel for marketers and attribution basics.
With CXL Institute I have learned so much about topics I thought I already knew, picking up great pro tips from some of the best instructors and top 1% marketers.
Let’s dive into the learning material by understanding the Excel basics:
- Sort and Filter: Making sense of data starts with sorting and filtering. Sorting can be done in ascending or descending order, while filtering reduces the view of the overall data to the exact rows we want to see.
- Sum variations: I never knew there were different sum variations.
a) SUMIF: sum based on one condition
b) SUMIFS: sum based on multiple conditions, using relative and absolute references.
- Count variations: Count variations were really cool to understand.
a) COUNTA: returns the count of non-empty cells, including text (subtract 1 if the range includes a header), whereas COUNT returns only the count of numeric cells.
b) COUNTIF: count based on one condition
c) COUNTIFS: count based on multiple conditions (similar to SUMIFS)
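The course keeps this inside Excel and Sheets, but as a quick sketch of what the conditional sum and count functions compute, here is the same logic in Python (the sample data and field names are made up for illustration):

```python
# Hypothetical rows, like a small spreadsheet range with headers.
rows = [
    {"channel": "PPC", "country": "US", "revenue": 120},
    {"channel": "SEO", "country": "US", "revenue": 80},
    {"channel": "PPC", "country": "UK", "revenue": 50},
]

def sumifs(rows, value_key, **conditions):
    """SUMIFS-style: sum value_key over rows matching ALL conditions."""
    return sum(r[value_key] for r in rows
               if all(r[k] == v for k, v in conditions.items()))

def countifs(rows, **conditions):
    """COUNTIFS-style: count rows matching ALL conditions."""
    return sum(1 for r in rows
               if all(r[k] == v for k, v in conditions.items()))

print(sumifs(rows, "revenue", channel="PPC"))                # one condition, like SUMIF -> 170
print(sumifs(rows, "revenue", channel="PPC", country="US"))  # multiple conditions, like SUMIFS -> 120
print(countifs(rows, country="US"))                          # like COUNTIFS -> 2
```

With one condition you get SUMIF/COUNTIF behaviour; adding more conditions gives the SUMIFS/COUNTIFS behaviour described above.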
- Table headers: We then talked about the general rule of keeping headers for number columns right-aligned and headers for text columns left-aligned. These rules can change when a filter is applied, in which case the number headers can be centre-aligned. Furthermore, if we put the data into Excel tables, any change or addition gets applied throughout the table in the new formatting; this is an easy way to add a formula across a whole column.
- Pivot Tables: I then got a refresher on pivot tables, which can be created via Data > Pivot Table. We can then add rows, columns and filters to trim down our view. There are some really amazing features for segmenting the data in a pivot table that I was unaware of:
a) Slicer: Analyze > Insert Slicer. The slicer works well with a limited number of category fields, and it makes the applied filter clearly visible. In Sheets, we can add a slicer via Data > Slicer and then pick the column we want to use; it gives a great view of which filters are being applied.
b) Filters: Filters work great with a long list of category fields. In Excel, a filter can be added through Data > Filter; in Sheets, it sits at the bottom of the pivot table editor. Filters in Sheets combine with each other.
c) Timeline: Timeline works only with date fields, and you can specify months, quarters and years (Analyze > Insert Timeline). Timeline is unavailable in Sheets.
d) Calculated fields: This was such a fresh find. For some calculations in a pivot table, the sum or the average might not make sense, so you can add a new calculated field as part of the pivot table. In Excel, this is done via Analyze > Fields, Items & Sets > Insert Calculated Field, then creating the formula and setting up the new field. In Sheets, this can be done right where we add the columns: choose the last option, Calculated Field.
e) Filter view in Sheets: We can create a filter view to easily navigate between different important filters. We can then switch between views and even share them by taking the filter view ID (FVID) and using =HYPERLINK(“URL”, “range_name”). The link can be pasted anywhere and will take the reader to that particular view.
f) Power tips for pivot tables in Excel: Sheets pivot tables are a little less powerful than the Excel version. Some power tips I learned are:
— Start the pivot table after converting the data into an Excel table; this makes it easier to bring in new data fields, which automatically show up in the pivot table when it is refreshed.
— The detailed view behind a number in the pivot table can be seen by double-clicking on it, which opens the composition of that number, if any. This opens a new sheet, which can be controlled through the pivot table options (Analyze > Options > Data).
— You can create individual worksheets based on the filters. This is only practical when you have a small set of filter values.
— The “Show Values As” option can be used to correlate the data in many different ways. This is a great feature which is generally underused.
— Turning off autofit columns: Analyze > Options > untick the autofit option. This makes the experience smoother for everyone viewing the data when flipping through the filters.
— Pivot chart: Though this is a cool feature, I think it tends to need more cleanup of the chart. On the plus side, a pivot chart is tied to its pivot table, which means all the filters and slicers are attached to the chart as well.
We then moved on to various other features of Excel and Sheets, including VLOOKUP. Let’s look at some of the cool points learned:
- I learned about removing duplicates, splitting data with Text to Columns, and also that there is always another way to achieve a result in Excel.
- VLOOKUP is such a powerful formula that can be used in many different situations. We looked at how to reduce the impact of error messages through IFNA or IFERROR; these functions help avoid #N/A returns and instead return a controlled message.
=VLOOKUP(search_key, range, index, [is_sorted])
- search_key: the value to search for.
- range: the range to consider for the search. The first column in the range is searched for search_key.
- index: the column index of the value to be returned, where the first column in the range is numbered 1. If index is not between 1 and the number of columns in the range, #VALUE! is returned.
- is_sorted: [TRUE by default] indicates whether the column to be searched (the first column of the specified range) is sorted. FALSE is recommended in most cases.
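To make the exact-match behaviour concrete, here is a small Python sketch of what VLOOKUP does with is_sorted set to FALSE, including the IFNA-style controlled fallback mentioned above (the table data is hypothetical):

```python
# A tiny two-column-plus lookup table: first column is searched, the rest returned.
table = [
    ["apple", 1.20, "fruit"],
    ["carrot", 0.50, "vegetable"],
]

def vlookup(search_key, rng, index, default="#N/A"):
    """Scan the first column for search_key; return column `index` (1-based)."""
    for row in rng:
        if row[0] == search_key:
            return row[index - 1]
    return default  # the controlled value IFNA/IFERROR would let you supply

print(vlookup("carrot", table, 2))             # -> 0.5
print(vlookup("banana", table, 2, default=0))  # controlled message instead of #N/A -> 0
```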
3. Index and Match: Sometimes the information you are looking for is not in the first column of the range. We can then use the combination of INDEX and MATCH to work around this limitation: MATCH locates where in the range the key sits, and INDEX uses that position to return the value we are looking for.
=INDEX(reference, [row], [column])
- reference: the array of cells to be offset into.
- row: [OPTIONAL, 0 by default] the number of offset rows.
- column: [OPTIONAL, 0 by default] the number of offset columns.
INDEX and MATCH give us more flexibility when what we’re matching on is not in the very first column that we would be doing the VLOOKUP on.
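As a sketch of that combination, the Python below mirrors MATCH (find the 1-based position of a key in any column) feeding into INDEX (return the value at that position from another column); the column data is made up:

```python
names = ["alice", "bob", "carol"]  # column we match on (need not be the first column)
spend = [200, 450, 125]            # column we return from

def match(key, column):
    """MATCH with exact match (match_type 0): 1-based position of key."""
    return column.index(key) + 1

def index(column, row):
    """INDEX: value at a 1-based row offset."""
    return column[row - 1]

print(index(spend, match("bob", names)))  # -> 450
```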
4. XLOOKUP: This is one up from VLOOKUP and addresses its shortcomings. Though XLOOKUP is not available everywhere yet, it was fascinating to learn about this function. It takes care of these challenges:
a) The lookup data no longer has to sit in the left-hand column
b) No need to remember to specify exact match (it is the default)
c) VLOOKUP cannot handle column insertions
d) VLOOKUP can’t do horizontal lookups (that needs HLOOKUP)
e) VLOOKUP has odd approximate-match defaults
f) VLOOKUP requires referencing the complete table range
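The list above can be sketched in Python: XLOOKUP takes the lookup and return columns separately (so the return column can sit to the left of the lookup column) and accepts a not-found value directly. The SKU and product data below are invented:

```python
def xlookup(search_key, lookup_col, return_col, if_not_found="#N/A"):
    """Exact-match lookup: return the value from return_col at the position
    where lookup_col equals search_key, or if_not_found otherwise."""
    for pos, value in enumerate(lookup_col):
        if value == search_key:
            return return_col[pos]
    return if_not_found

skus  = ["A-1", "B-2", "C-3"]     # sits to the LEFT of the name column
names = ["anvil", "bolt", "clamp"]

print(xlookup("bolt", names, skus))            # returns from a column on the left -> B-2
print(xlookup("washer", names, skus, "none"))  # built-in not-found handling -> none
```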
5. Sparklines: A one-cell chart that can pictorially represent the data. It is easy to set up but tricky to delete. In Excel it is added through the chart options on the toolbar, but in Sheets it is done through the SPARKLINE function:
=SPARKLINE(data, [options])
- data: the range or array containing the data to plot.
- options: [OPTIONAL] a range or array of optional settings and associated values used to customise the chart.
6. Conditional formatting: This helps to highlight some of the data based on the formatting rules available in Excel. Conditional formatting is very easy to use in Excel, but not so much in Sheets. One pro tip I found was to define named ranges, which makes it easier to select and apply the formatting. In Sheets, we can apply multiple rules and use filters to make more sense of the data.
Understanding string (text) functions
- LEN: returns the length of the text
- SUBSTITUTE: replaces a defined part of the text with a substitute value
- FIND and SEARCH: FIND is case-sensitive and SEARCH is case-insensitive. You specify what you are looking for, where to look, and how many characters in you wish to start the search.
- MID: pulls out text from a given position inside whatever string you are looking at
- LEFT and RIGHT: pull the leftmost and rightmost characters from the string
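These string functions can be sketched with their Python equivalents on a marketing-flavoured task, pulling the domain out of an email address (the sample address is made up):

```python
email = "Jane.Doe@Example.com"

length = len(email)                   # LEN(email)
clean  = email.replace(".", "_")      # SUBSTITUTE(email, ".", "_"), all occurrences
at_pos = email.find("@") + 1          # FIND("@", email); case-sensitive, 1-based in Excel
at_ci  = email.lower().find("@") + 1  # SEARCH is the case-insensitive variant
domain = email[at_pos:]               # MID(email, at_pos + 1, LEN(email))
left3  = email[:3]                    # LEFT(email, 3)
right3 = email[-3:]                   # RIGHT(email, 3)

print(length, domain, left3, right3)  # -> 20 Example.com Jan com
```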
Understanding Error Trapping
- Data validation: Reducing and reporting errors is a great feature of Excel. This can be done through Data Validation, for example:
- Pull down list
- Whole number specification
- Summing with brackets
- IF function to reduce the cross footing
- Protected cells to protect the sheet and keep a designated data entering space
Attribution is not just another number in the spreadsheet; it is about the overall customer journey. Marketing attribution links the different platforms to an end point: conversions. When we talk about attribution, we refer to conversions as actions tied to monetary values. We then started to talk about different attribution solutions like Google Attribution, Visual IQ, etc.
Attribution is not just online. It was great to learn the new ways offline attribution is measured and the advancements happening around POS attribution and TV advertising.
Another key part is maintaining uniformity in how attribution models are applied. It’s not so much about the math as about uniformity in understanding the consumer. Furthermore, the need for attribution depends on the stage of the company, the skills at hand and the technology investment. It is, after all, about understanding the touchpoints of a consumer leading to the conversion, and the number of times a consumer interacted with the brand before converting.
We then dived into the attribution models, starting from visits (the last visit that leads to the conversion). Attribution visualizes what happened before that, and the models below are illustrated with a four-touch journey.
- Last non-direct click: This is what Google Analytics uses by default. This model gives the credit to the source of the last non-direct interaction rather than the final direct click.
- Last interaction: This gives 100% of the attribution to the last interaction. This is very useful for measuring online-to-offline conversion.
However, neither of them is great at understanding the complete journey of the consumer (the awareness and behaviour part).
3. Linear attribution: Distributes the weight equally across every interaction. However, this rarely helps, because every interaction gets the same credit regardless of its actual contribution.
4. Time decay: This model has a lot of value for many organisations. It reduces the weight allotted to an interaction the further in time it happened from the conversion.
5. Position based: Here the first and last interactions get the major share of the value, and the minor share is distributed among the middle interactions. This is one of the most helpful models, as it reflects both the challenges of acquisition and the challenges of conversion.
6. Custom attribution: This is the model where every touchpoint can be set to have a different value. However, this is very dangerous, as it lets the marketer decide what each channel is worth. Hence, machine learning is increasingly used to decide this.
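To make the differences between these models concrete, here is a sketch that computes the credit weights each model would assign to a hypothetical four-touch journey. The 7-day half-life and the 40/20/40 position-based split are common defaults I chose for illustration, not values from the course:

```python
# Hypothetical journey: four touches, each tagged with days before the conversion.
touches = ["Display", "SEO", "Email", "PPC"]
days_before_conversion = [14, 7, 3, 0]

def last_interaction(n):
    """100% of the credit goes to the final touch."""
    return [0.0] * (n - 1) + [1.0]

def linear(n):
    """Equal credit for every touch."""
    return [1.0 / n] * n

def time_decay(days, half_life=7.0):
    """Weight halves for each additional half_life days before conversion."""
    raw = [2 ** (-d / half_life) for d in days]
    total = sum(raw)
    return [w / total for w in raw]

def position_based(n, first=0.4, last=0.4):
    """First and last touches share the major value; the middle splits the rest."""
    middle = (1.0 - first - last) / (n - 2)
    return [first] + [middle] * (n - 2) + [last]

for name, weights in [("last interaction", last_interaction(4)),
                      ("linear", linear(4)),
                      ("time decay", time_decay(days_before_conversion)),
                      ("position based", position_based(4))]:
    print(name, [round(w, 2) for w in weights])
```

Printing the rounded weights side by side makes the trade-offs visible: linear spreads credit evenly, time decay favours the touches closest to conversion, and position based rewards the first and last touches.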
The way we want to do things moving forward and the way the industry is going, is using an algorithm or machine learning to define what we’re giving value to within that journey.
We then moved to a tactical understanding of attribution for every channel:
- CRO: Conversion rate optimisation typically focuses on improving the conversion rate of a single interaction. Attribution in CRO focuses on understanding not just the conversion from that single interaction, but the overall set of interactions leading a group of people to convert.
- PPC: Attribution is powerful in PPC. Generally, Google Ads attribution is based on last click or last non-direct click models, and any change in the attribution model will impact the Google Ads account. We also need to understand that brand PPC is different from generic PPC when we track attribution: the former has a short lead time to conversion, while the latter involves a longer time and more touchpoints. Therefore, to attribute the value, the interactions need to be weighted accordingly.
- SEO: From an attribution perspective, we look at SEO through the customer journey. We can understand the terminology, or the mood of the user, by looking at the search terms used when they landed on a particular page. SEO is a broad channel with a multitude of different search terms, and every term will have a different attribution. We need to map these keywords to landing pages and understand which landing pages link to which stage of the consumer journey.
- Display: Display works differently from any other source, and there is a lot of challenge in assigning value to the source and getting the attribution right. The model that works for display is a view-based (post-impression) model rather than a click-based one. We also need to understand the campaign, the impression and the time of that impression. We can then map user-level impressions to user-level journeys and attribute the impressions that happened before a visit and conversion. With the information on clicks and impressions we can then attribute the value of display.
- Affiliates: Attribution is really important for affiliates. Affiliates tend to take all the credit for the conversion, but I learnt that as marketers we need a better understanding of the customer journey, the type and behaviour of the customer, and the place of the affiliate within that journey. With multiple affiliates out there, we need to map the customer journey and the value of every interaction within it, which might sit at the awareness stage or the final conversion stage.
- Email: Understand the consumer to determine which emails should be sent to a user. There are multiple things that can be done to relay the information about an action within an email back to the attribution model.
- TV: TV data does not come at the individual level but in aggregate. TV attribution can be related to spot timing and the impact a TV campaign has on SEO, PPC, content and email during that campaign. Attribution also helps us understand the value of such a campaign not just in that moment, but for days or weeks after it.
- Direct mail: Every customer address should be assigned to a customer ID, and all customer IDs are linked to the attribution model. We need to record that a piece of direct mail has been sent to a customer ID, and whether there was a smart URL that can be tracked back to that customer ID. We need attribution for direct mail because it has the power to change the customer’s understanding of the brand, and even the search terms they use.
Understanding the strategic shift due to attribution
- Customer journey analysis — We need to look at the consumer journey from the point of view of introducing the next best step. This can only happen once we understand the consumer journey and compare converted consumers with those yet to convert. Attribution can then show which action is more valuable.
- Value of brand — There are four parts to this: brand awareness, brand propensity, product awareness and product propensity. Brand awareness matters for new brands; propensity is the inclination of an individual to buy your brand. In the product realm, we are trying to show that our product is better than the other products out there.
- Lifetime value — LTV is not an average. We want to understand the value of a consumer throughout the time he or she has been engaged with our product. The basic concept here is that the cost to acquire a new customer is far higher (roughly 10x) than the cost to re-acquire someone. Furthermore, the value brought in by a re-acquired customer is about 10% higher.
- Customer data — Though we look at aggregate values in attribution, the best part is that we can drill down to the individual level. This allows us to look at outliers from the usual customer journey, which can even help in fraud detection, or in realigning our efforts to get those outliers to convert.
- Value of the user — This is useful for lead scoring and propensity scoring. For every individual, the attribution model can produce a propensity score for that user, which can be used by multiple tools like bid management, CRO tools or lead scoring systems. We can also overlay this on the people who have not converted, understand their propensity to convert, and tailor our outreach and marketing activities accordingly. So attribution at the individual user level is really important and valuable.
These modules followed a short, packed content approach, which I really liked compared to long 50–55 min sessions. It made sure that attention doesn’t dwindle, and coupled with the amazing trainers it was a treat to learn from them. Attribution was an eye-opener in terms of the depth it touched upon.