Documentation ¶
Index ¶
- type DataFlowTester
- func (t *DataFlowTester) CreateSnapshot(dst schema.Tabler, csvRelPath string, pkfields []string, targetfields []string)
- func (t *DataFlowTester) FlushRawTable(rawTableName string)
- func (t *DataFlowTester) FlushTabler(dst schema.Tabler)
- func (t *DataFlowTester) ImportCsvIntoRawTable(csvRelPath string, tableName string)
- func (t *DataFlowTester) Subtask(subtaskMeta core.SubTaskMeta, taskData interface{})
- func (t *DataFlowTester) VerifyTable(dst schema.Tabler, csvRelPath string, pkfields []string, targetfields []string)
Examples ¶
Constants ¶
This section is empty.
Variables ¶
This section is empty.
Functions ¶
This section is empty.
Types ¶
type DataFlowTester ¶
type DataFlowTester struct {
Cfg *viper.Viper
Db *gorm.DB
Dal dal.Dal
T *testing.T
Name string
Plugin core.PluginMeta
Log core.Logger
}
- Create a folder under your plugin root folder, e.g. `plugins/gitlab/e2e/`, to host all your e2e tests
- Create a folder named `tables` to hold all data in `csv` format
- Create e2e test-cases to cover all possible data-flow routes
Example code:
See [Gitlab Project Data Flow Test](plugins/gitlab/e2e/project_test.go) for details
Use `NewDataFlowTester` to create a DataFlowTester:
Example ¶
var t *testing.T // stub
var gitlab core.PluginMeta
dataflowTester := NewDataFlowTester(t, "gitlab", gitlab)
taskData := &tasks.GitlabTaskData{
Options: &tasks.GitlabOptions{
ProjectId: 3472737,
},
}
// import raw data into the raw table (existing rows are deleted first)
dataflowTester.ImportCsvIntoRawTable("./tables/_raw_gitlab_api_projects.csv", "_raw_gitlab_api_project")
// verify extraction
dataflowTester.FlushTabler(&models.GitlabProject{})
dataflowTester.Subtask(tasks.ExtractProjectMeta, taskData)
dataflowTester.VerifyTable(
models.GitlabProject{},
"tables/_tool_gitlab_projects.csv",
[]string{"gitlab_id"},
[]string{
"name",
"description",
"default_branch",
"path_with_namespace",
"web_url",
"creator_id",
"visibility",
"open_issues_count",
"star_count",
"forked_from_project_id",
"forked_from_project_web_url",
"created_date",
"updated_date",
"_raw_data_params",
"_raw_data_table",
"_raw_data_id",
"_raw_data_remark",
},
)
func NewDataFlowTester ¶
func NewDataFlowTester(t *testing.T, pluginName string, pluginMeta core.PluginMeta) *DataFlowTester
NewDataFlowTester creates a *DataFlowTester to help developers test their subtasks' data flow
func (*DataFlowTester) CreateSnapshot ¶
func (t *DataFlowTester) CreateSnapshot(dst schema.Tabler, csvRelPath string, pkfields []string, targetfields []string)
CreateSnapshot reads rows from the database and writes them into a .csv file.
func (*DataFlowTester) FlushRawTable ¶
func (t *DataFlowTester) FlushRawTable(rawTableName string)
FlushRawTable migrates the raw table and deletes all records from the specified table
func (*DataFlowTester) FlushTabler ¶
func (t *DataFlowTester) FlushTabler(dst schema.Tabler)
FlushTabler migrates the table and deletes all records from the specified table
func (*DataFlowTester) ImportCsvIntoRawTable ¶
func (t *DataFlowTester) ImportCsvIntoRawTable(csvRelPath string, tableName string)
ImportCsvIntoRawTable imports records from the specified csv file into the target table; note that existing data will be deleted first.
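Conceptually, each csv row becomes one record keyed by the header names. A self-contained sketch of that row-loading step (stdlib only; `loadCsvRecords` is a hypothetical helper for illustration, and the truncate-then-insert against the real raw table is omitted):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// loadCsvRecords parses csv content into one map per data row, keyed by
// the header names -- roughly how csv rows are turned into rows for the
// target raw table (which ImportCsvIntoRawTable truncates first).
func loadCsvRecords(content string) ([]map[string]string, error) {
	records, err := csv.NewReader(strings.NewReader(content)).ReadAll()
	if err != nil {
		return nil, err
	}
	header := records[0]
	rows := make([]map[string]string, 0, len(records)-1)
	for _, rec := range records[1:] {
		row := map[string]string{}
		for i, name := range header {
			row[name] = rec[i]
		}
		rows = append(rows, row)
	}
	return rows, nil
}

func main() {
	rows, err := loadCsvRecords("id,data\n1,hello\n")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(rows), rows[0]["data"]) // one row, data column "hello"
}
```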
func (*DataFlowTester) Subtask ¶
func (t *DataFlowTester) Subtask(subtaskMeta core.SubTaskMeta, taskData interface{})
Subtask executes the specified subtask
func (*DataFlowTester) VerifyTable ¶
func (t *DataFlowTester) VerifyTable(dst schema.Tabler, csvRelPath string, pkfields []string, targetfields []string)
VerifyTable reads rows from a csv file and compares them with records from the database one by one. You must specify the primary-key fields with `pkfields` so DataFlowTester can select the exact record from the database, and specify which fields to compare via the `targetfields` parameter.
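The matching logic can be sketched without a database: index the actual records by their primary-key values, then for each expected csv row look up the matching record and compare only the target fields. A minimal, in-memory approximation (`verifyRows` and its map-based records are assumptions for illustration, not this package's API):

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// verifyRows checks actual records against expected rows from csv content,
// matching each expected row by its primary-key fields and comparing only
// the target fields -- a simplified sketch of what VerifyTable does.
func verifyRows(expectedCsv string, actual []map[string]string, pkFields, targetFields []string) error {
	records, err := csv.NewReader(strings.NewReader(expectedCsv)).ReadAll()
	if err != nil {
		return err
	}
	header, rows := records[0], records[1:]
	col := map[string]int{}
	for i, name := range header {
		col[name] = i
	}
	// build a composite key from the primary-key field values
	key := func(get func(string) string) string {
		parts := make([]string, len(pkFields))
		for i, pk := range pkFields {
			parts[i] = get(pk)
		}
		return strings.Join(parts, "|")
	}
	// index actual records by primary key
	byPk := map[string]map[string]string{}
	for _, rec := range actual {
		rec := rec
		byPk[key(func(f string) string { return rec[f] })] = rec
	}
	for _, row := range rows {
		row := row
		rec, ok := byPk[key(func(f string) string { return row[col[f]] })]
		if !ok {
			return fmt.Errorf("no record found for expected row %v", row)
		}
		for _, f := range targetFields {
			if rec[f] != row[col[f]] {
				return fmt.Errorf("field %s: got %q, want %q", f, rec[f], row[col[f]])
			}
		}
	}
	return nil
}

func main() {
	expected := "gitlab_id,name\n3472737,gitlab-ce\n"
	actual := []map[string]string{{"gitlab_id": "3472737", "name": "gitlab-ce"}}
	if err := verifyRows(expected, actual, []string{"gitlab_id"}, []string{"name"}); err != nil {
		panic(err)
	}
	fmt.Println("ok")
}
```

Fields outside `targetfields` are simply ignored, which is why the real method asks you to list the columns worth asserting on.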