A/B Testing of Recommendation Blocks
A/B tests are used to determine the efficacy of various recommendation elements: title, appearance, placement, or data source.
In the A/B testing tab, you can start a new test and later view its results.
To run a new test, click Start A/B testing. You’ll be directed to Parameters to specify test conditions.
You can also start testing in Parameters by clicking the Start A/B testing button.
Parameters
First, specify the number of variants to be tested, distribute weight, and enter a description.
Weight is the share of the audience that will see the corresponding variant. You can distribute weights evenly by clicking the button of the same name, or set the ratio manually by hovering over the number in each variant's row. Variants with zero weight won't be displayed on the site.
By default, two variants are set for A/B testing. This is the required minimum. To add a new one, click Add variant. You can test up to 8 variants.
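The weighted split described above can be sketched as a simple proportional draw. This is an illustration only, not the platform's actual assignment logic; the variant names and weights are hypothetical:

```python
import random

def pick_variant(variants):
    """Pick a recommendation variant for a visitor in proportion
    to its weight. `variants` is a list of (name, weight) pairs,
    mirroring the audience shares set in the Parameters tab.
    Variants with zero weight are never shown."""
    eligible = [(name, w) for name, w in variants if w > 0]
    names = [name for name, _ in eligible]
    weights = [w for _, w in eligible]
    return random.choices(names, weights=weights, k=1)[0]

# Two variants split 70/30; variant C has zero weight and is skipped.
variant = pick_variant([("A", 70), ("B", 30), ("C", 0)])
```

Over many visitors, roughly 70% would see variant A and 30% variant B, while C never appears.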
Next, specify the parameter to be tested. We recommend testing one parameter at a time.
- Placement. Click the arrow at the end of the title to choose the variant from the available placements.
- Appearance. Similarly to placements, click the arrow at the end of the title to choose the variant from the available appearances.
- Title. Enter the title variants.
- Data source. Click the corresponding data source to replace it with the variant to be tested.
Click Save after all settings are done.
If you don't run any A/B test, the Parameters tab displays only the parameters of the recommendation.
A/B Testing Report
Each recommendation that has been A/B tested is labeled with the corresponding button in the general list. Hover over it to preview the results in brief.
Depending on the current status (complete/incomplete) and results for two tested indicators (CTR, Conversion), the button has different colors:
- Green: testing is complete, both winners (CTR, Conversion) are determined.
- Grey and yellow: there is not enough data to determine the winner for CTR, the results for Conversion are the same.
- Yellow: the results for CTR and for Conversion are the same, testing is in progress.
- Green and yellow: the winner for CTR is determined, the results for Conversion are the same.
- Yellow and green: the results for CTR are the same, the winner for Conversion is determined.
- Grey: there is not enough data to determine the winner for any indicator.
There are several ways to open the detailed report:
- Click the corresponding recommendation in the general list and go to A/B testing.
- Click A/B testing in the general list.
- Click View test in the preview.
First, the report shows the winner for both indicators. To become a winner, a variant must perform better with a confidence of at least 90%, and the confidence interval between variants must be over 5%. Testing will continue until this minimum threshold is reached.
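The winner rule above (at least 90% confidence and more than a 5% margin between variants) can be sketched with a standard one-sided two-proportion z-test. This is an assumption for illustration; the platform does not document its exact statistical formula, and the function name and thresholds here are hypothetical:

```python
from math import sqrt
from statistics import NormalDist

def is_winner(clicks_a, views_a, clicks_b, views_b,
              min_confidence=0.90, min_lift=0.05):
    """Return True if variant A beats variant B with at least
    `min_confidence` statistical confidence AND a relative lift
    above `min_lift`, mirroring the thresholds described above."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    if p_b == 0 or (p_a - p_b) / p_b < min_lift:
        return False  # margin between variants is too small
    # Pooled proportion and standard error for the z-test.
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_a - p_b) / se
    confidence = NormalDist().cdf(z)
    return confidence >= min_confidence

# 6% vs 4% CTR on 5,000 views each: a clear, confident winner.
print(is_winner(300, 5000, 200, 5000))
```

Until both conditions hold, such a test keeps reporting no winner, which matches the behavior of the report continuing to collect data.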
If there is not enough data, you can finish the test manually by clicking Choose Variant A and finish test (the better-performing variant is shown in bold).
If different variants win for CTR and Conversion, the variant that performs better for Conversion is declared the winner, as sales are more important than clicks.
Results show the indicators for each variant; the winner's indicators are highlighted in green. Hover over any indicator to see the confidence interval – a metric that reflects the accuracy of testing. The more people who saw the recommendations and performed the target action, the higher the confidence and the lower the standard error. If the standard error percentage is high, wait until enough data is collected.
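The relationship between sample size and standard error can be shown with the textbook formula for the standard error of a proportion. A minimal sketch, assuming a simple binomial model (the function name is illustrative, not part of the platform):

```python
from math import sqrt

def standard_error_pct(conversions, views):
    """Standard error of a conversion rate, as a percentage.
    More views at the same rate shrink the error, making the
    measured indicator more trustworthy."""
    p = conversions / views
    return 100 * sqrt(p * (1 - p) / views)

# The same 5% rate is far less certain on 100 views
# than on 10,000 views.
print(round(standard_error_pct(5, 100), 2))
print(round(standard_error_pct(500, 10000), 2))
```

With 100 views the error is roughly ten times larger than with 10,000 views, which is why the report advises waiting for more data when the standard error is high.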
Activity dynamics visualizes the results in the form of histograms. Hover over the graph to see details.
At the bottom, there is a history of completed tests with the date of testing and the winner.