
Guide to Automated CSV Imports into Feedier

Automate your data import into Feedier using file uploads

Written by Léa Leclercq
Updated this week

Purpose: This page explains how to set up automatic CSV imports of feedback into Feedier without requiring initial technical intervention. It covers the end-to-end process, prerequisites, SFTP options, file format, mapping, scheduling, preprocessing, uniqueness (de-duplication), limitations, and responsibilities.

Who should use this guide

Clients who need to regularly export feedback data (CSV) and want an automated ingestion into Feedier.

Quick start — end-to-end in 7 steps

  1. Choose source: BigQuery or SFTP. For SFTP, decide Option 1 (your SFTP) or Option 2 (Feedier-provided).

  2. Share access details (and, for Feedier SFTP, your IP and SSH public key). Feedier will create in/(survey_name) and out folders.

  3. Review the CSV file requirements. If the file's data needs to be reprocessed before import, follow the preprocessing steps and respect the uniqueness constraint.

  4. Prepare a CSV that meets the header rules and upload a representative sample to in.

  5. Run the first manual import on the Sources page. Save and share the import UUID with the CS team.

  6. Confirm schedule, delimiter, file naming, uniqueness key, and any preprocessing needs.

  7. Feedier enables automation. Monitor the first automated runs; contact support if anomalies appear.


Supported ingestion options (choose one)

Recommendation: If you already use Google Cloud Platform, upload your feedback to BigQuery and activate the BigQuery import integration for a simpler, zero-maintenance setup. If not, proceed with SFTP as described below.

  • BigQuery (preferred for GCP users): Push data to a designated table; Feedier’s BigQuery integration pulls automatically.

  • SFTP (current standard repository): Drop CSV files into an SFTP inbox from which Feedier retrieves and imports them.


SFTP setup options

You will upload your CSV files to an SFTP server with two directories: in and out. Upload files to in; processed files and result artifacts are written to out.

Option 1 — Use your own SFTP (you grant Feedier access)

  1. Confirm the SFTP host, username, and port.

  2. Create a dedicated user and home directory with subfolders in and out.

  3. Share the following with the Feedier CS team: host, port, username, and the allowed source IP range or firewall rules. Feedier will provide the SSH public key to use for key-based authentication.

Option 2 — Feedier provides SFTP access

Provide the following to Feedier so we can create your dedicated SFTP access:

  • The public IP address from which you will upload (for allowlisting).

  • The SSH public key of the server or machine that will access SFTP.

Feedier will return your SFTP endpoint and credentials. Your SFTP structure will include in and out directories.

The same folder structure created in the in directory must be replicated in the out directory.

For example, if three subdirectories are created in in:

  • in/survey_A

  • in/survey_B

  • in/survey_C

The same structure must exist in out:

  • out/survey_A

  • out/survey_B

  • out/survey_C

Each processed file from the in directory will be transferred to its corresponding directory in the out directory after the execution of the import integration.
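
For reference, here is a minimal upload sketch using Python and the paramiko library, assuming key-based authentication and the example folder structure above. The host, username, key path, and folder name are placeholders; use the values agreed during setup.

import os
import paramiko

HOST = "54.246.123.185"        # example endpoint; use the one shared during setup
PORT = 22
USERNAME = "CLIENT_NAME"       # placeholder SFTP username
KEY_PATH = os.path.expanduser("~/.ssh/id_rsa")   # private key matching the public key you shared

def upload(local_file, remote_dir="in/survey_A"):
    # Open an SFTP session authenticated with the SSH key and drop the file into the in/ folder
    key = paramiko.RSAKey.from_private_key_file(KEY_PATH)
    transport = paramiko.Transport((HOST, PORT))
    transport.connect(username=USERNAME, pkey=key)
    sftp = paramiko.SFTPClient.from_transport(transport)
    try:
        remote_path = remote_dir + "/" + os.path.basename(local_file)
        sftp.put(local_file, remote_path)
    finally:
        sftp.close()
        transport.close()

upload("data_2025_08_08.csv")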


CSV file requirements

  • File type: CSV file with a consistent delimiter (comma by default; semicolon also supported if specified).

  • Header row is mandatory. Headers must be lowercase, with underscores instead of spaces. Example: zip_code, ticket_id, status, created_at.

  • No spaces in header names. Avoid special characters and uppercase letters. Keep names stable over time to preserve mappings.

  • Encoding: UTF-8 recommended. Quote values that contain delimiters or line breaks. Escape embedded quotes using double quotes.

  • Date/time fields: keep a consistent, ISO-like format per column. If locale-specific formats are used, communicate them during first import to ensure correct parsing.
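
As an illustration, the following Python sketch writes a CSV that follows these rules (UTF-8 encoding, lowercase snake_case headers, minimal quoting, ISO dates). The column names are examples only; use the attributes you plan to map in Feedier.

import csv

FIELDS = ["ticket_id", "email", "status", "created_at", "comment", "rating"]  # example attributes

rows = [
    {"ticket_id": "12345", "email": "[email protected]", "status": "in_progress",
     "created_at": "2025-08-08", "comment": "Customer reported intermittent issue", "rating": "4"},
]

with open("data_2025_08_08.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f,
        fieldnames=FIELDS,
        delimiter=",",              # or ";" -- keep it consistent and tell the CS team which one
        quoting=csv.QUOTE_MINIMAL,  # quotes values containing the delimiter or line breaks
    )
    writer.writeheader()            # mandatory header row, lowercase with underscores
    writer.writerows(rows)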

🔎 Preprocessing (optional)

If the files produced by your systems differ from the CSV used for the first import (headers, delimiter, formatting), Feedier can configure preprocessing to normalize data before applying the mapping. This is usually identified by the project lead (Client side) and the CS Implementation lead before setting up the automation.

  • Provide two samples to the CS Implementation lead: the original exported file and the desired, import-ready file (we recommend confirming the final format with your CS lead to make sure the attributes fit the use case and the platform's import format). The support team will then set up a preprocessing workflow accordingly.

  • Once validated, you can keep sending your original format; the workflow will transform it automatically.
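
To illustrate what such a preprocessing workflow typically does, here is a sketch in Python that renames raw headers to the agreed lowercase names and switches the delimiter. The raw header names and file names are hypothetical; Feedier configures the real workflow from the two sample files you provide.

import csv

# Hypothetical mapping from raw export headers to the agreed lowercase names
HEADER_MAP = {"Ticket ID": "ticket_id", "Created At": "created_at", "Status": "status"}

def normalize(raw_path, clean_path):
    with open(raw_path, newline="", encoding="utf-8") as src, \
         open(clean_path, "w", newline="", encoding="utf-8") as dst:
        reader = csv.DictReader(src, delimiter=";")   # raw export uses ";" in this sketch
        clean_fields = [HEADER_MAP.get(h, h) for h in reader.fieldnames]
        writer = csv.DictWriter(dst, fieldnames=clean_fields, delimiter=",")  # import-ready file uses ","
        writer.writeheader()
        for row in reader:
            writer.writerow({HEADER_MAP.get(k, k): v for k, v in row.items()})

normalize("raw_export.csv", "data_2025_08_08.csv")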

Uniqueness constraint (de-duplication)

To avoid importing the same feedback multiple times (e.g., a ticket that evolves from in_progress to closed and appears in two files from different weeks), enable a uniqueness rule.

  1. Tell the CS team that you want the uniqueness constraint activated.

  2. Specify the attribute used as the unique key (e.g., ticket_id or email). The workflow will use this to replace the prior record.

⚠️ Choose a key that is stable over time and present in every row.
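
For intuition, this Python sketch shows the logic the uniqueness rule follows: for rows sharing the same key, only the most recent record is kept. Feedier applies this on its side; the snippet is illustrative only, and the column names are examples.

import csv

def latest_per_key(path, key="ticket_id", date_col="created_at"):
    # Keep only the most recent row for each value of the key column
    latest = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            current = latest.get(row[key])
            # ISO-formatted dates compare correctly as plain strings
            if current is None or row[date_col] >= current[date_col]:
                latest[row[key]] = row
    return list(latest.values())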


First import (one-time manual mapping)

To enable import automation, perform a first import manually on the Feedier Sources page using a representative CSV file. This creates a mapping profile that automation will reuse.

  1. Prepare your CSV according to the requirements above.

  2. Go to the Sources page in Feedier and run a CSV import. Map columns to the desired Feedier attributes and questions and then save.

  3. Copy the import UUID displayed on the imports page; it will be used in the next step.



Automating your import

An automated workflow must be created that periodically reads your SFTP source, applies the saved mapping, and ingests new records.

  • Frequency: daily, weekly, or monthly.

  • File strategy: full snapshot vs incremental. For full snapshots, uniqueness rules should be applied to avoid duplicates; for incremental files, provide a stable unique key.

Communicate your preferred schedule and where the file will appear, using a date-stamped naming convention (e.g., in/survey_a/data_yyyy_mm_dd.csv).
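
Here is a small Python sketch of this convention, assuming a weekly cadence and an ISO created_at column: it builds the date-stamped file name and keeps only records from the last seven days so each file stays incremental rather than a full dump of your source system.

from datetime import date, timedelta

all_rows = [                                        # stand-in for your exported records
    {"ticket_id": "12345", "created_at": "2025-08-08"},
    {"ticket_id": "11111", "created_at": "2025-07-01"},
]

today = date.today()
file_name = "data_" + today.strftime("%Y_%m_%d") + ".csv"   # e.g. in/survey_a/data_2025_08_08.csv
since = (today - timedelta(days=7)).isoformat()             # weekly cadence: keep the last 7 days only

new_rows = [r for r in all_rows if r["created_at"] >= since]
print(file_name, len(new_rows))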


Steps to automate your import

  1. Go to Autopilot in Feedier.


  2. Create a new workflow.


  3. Choose the recurrence of your import (cron) in the recurrence node. For example, if your system drops the CSV every Monday at 09:00, schedule the run every Monday at 11:00 (a standard cron expression such as 0 11 * * 1) to ensure the file is available.


  4. Add a Webhook node.


  5. Configure the Webhook node with a JSON payload. This is an example:

{
  "api_key": "YOUR_ORG_API_KEY",
  "source_type": "SFTP",
  "uuid": "ab1b0543-d59a-4fa3-8792-d895f5c4ab70",
  "sftp_configuration": {
    "host": "54.246.123.185",
    "port": "22",
    "username": "CLIENT_NAME",
    "ssh_key": "default",
    "file_path": "feedier_connect/CLIENT_NAME/in/survey_A"
  },
  "preprocessing": "false",
  "delimiter": ";",
  "unicity": "false",
  "unicity_attribute_name": "attribute_name"
}
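
If it helps, you can assemble and validate the payload in Python before pasting it into the Webhook node, as in the sketch below. All values are the placeholders from the example above and must be replaced with your own.

import json

payload = {
    "api_key": "YOUR_ORG_API_KEY",                    # your organisation's private API key
    "source_type": "SFTP",
    "uuid": "ab1b0543-d59a-4fa3-8792-d895f5c4ab70",   # UUID of your first manual import
    "sftp_configuration": {
        "host": "54.246.123.185",
        "port": "22",
        "username": "CLIENT_NAME",
        "ssh_key": "default",
        "file_path": "feedier_connect/CLIENT_NAME/in/survey_A",
    },
    "preprocessing": "false",
    "delimiter": ";",
    "unicity": "false",
    "unicity_attribute_name": "attribute_name",
}

print(json.dumps(payload, indent=2))   # paste this output into the Webhook node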


JSON parameters reference

  • api_key: Client private key. Example: "qsdsqdsqqsfqsfqsfazfasqf"

  • source_type: The source from which the CSV you sent will be fetched. At this moment, only SFTP (internal or external) is supported. Allowed: SFTP

  • uuid: The UUID from the imports page for a prior import with the exact same CSV structure. The mapping will be reused. Example: ab1b0543-d59a-4fa3-8792-d895f5c4ab70

  • sftp_configuration.host: The host of the SFTP server. For Feedier-hosted SFTP, use the configuration communicated by the Feedier team. Example: 54.246.123.185

  • sftp_configuration.port: The port used for the SFTP connection. For Feedier-hosted SFTP, use the configuration communicated by the Feedier team. Example: 22

  • sftp_configuration.username: The username for the SFTP connection. Example: feedier_connect

  • sftp_configuration.private_key: The private key used for authenticating the SFTP connection. This should remain default unless you are using your own SFTP server; in that case, use the private key provided or configured for that server. Allowed: default or the private key content

  • preprocessing: Indicates whether the CSV needs preprocessing. If the CSV in SFTP matches the one used to generate the UUID, set to false. Allowed: true/false

  • delimiter: The delimiter used in the CSV file. Example: ;

  • preprocess_workflow: The workflow executed to preprocess the CSV before generating the UUID and importing. Provided as part of the payload when needed.

  • unicity: Whether to apply uniqueness rules (avoid importing the same record twice; replace the older record with the newer one). Allowed: true/false

  • unicity_attribute_name: The attribute used to check whether feedback already exists. Must match an attribute name already used on the platform. Example: ticket_id


Operational checklist

  • Ingestion method: Confirm BigQuery vs SFTP. If BigQuery, confirm dataset.table. If SFTP, confirm Option 1 (your SFTP) or Option 2 (Feedier-provided). Owner: Client

  • Access details: Provide host/port/username and allowlist rules (your SFTP), or your IP and SSH public key (Feedier SFTP). Owner: Client

  • Folder structure: Create in/(survey_name) subfolders and upload a sample CSV for the first import. Owner: Client

  • First import mapping: Run a manual CSV import on the Sources page and share the import UUID with the CS team. Owner: Client

  • Preprocessing need: If your system's raw CSV differs from the mapping CSV, share both the original and desired files to configure normalization. Owner: Feedier team

  • Uniqueness rule: Confirm the unique key attribute (e.g., ticket_id) to avoid duplicates and enable upsert behavior. Owner: Client

  • Schedule: Choose a daily/weekly/monthly cadence and confirm the file naming convention and location. Owner: Client → Feedier setup


Limitations and best practices

  • Stable schema: Keep headers stable. If you must change them, notify CS to update mapping/preprocessing.

  • Consistent delimiters: Do not mix commas and semicolons across files for the same flow; communicate the delimiter once.

  • One file per schedule: Prefer one consolidated file per cadence and survey. If multiple files exist, use clear, non-overlapping naming patterns.

  • Don’t send the entire Source dataset each time (e.g., your full ticket database or transcripts database). For better performance, include only new records in each file.


Security and access

  • Use SSH key-based access for SFTP.


Example CSV header and rows

Headers (lowercase, underscores): ticket_id,email,status,created_at,comment,rating
Example rows:
12345,[email protected],in_progress,2025-08-08,"Customer reported intermittent issue",4
12345,[email protected],closed,2025-08-10,"Issue resolved after patch",5

With uniqueness on ticket_id, the second row will supersede the earlier feedback for the same ticket, preventing duplicates.


FAQs

Can we automate imports without the first manual import?

We require a first manual import to capture the correct mapping (via its UUID). This minimizes surprises and ensures your automated flow matches your intended attributes.

What if our CSV delimiter is ';' instead of ','?

Supported. Tell the CS team your delimiter during setup so the workflow parses it correctly.

How do we avoid duplicates across weekly snapshots?

Activate the uniqueness rule and specify a stable key (e.g., ticket_id). The workflow will upsert based on the most recent record.

Can we send multiple surveys from one SFTP?

Yes. Create one subfolder per survey under in and keep consistent file names to simplify routing.

We changed a header name. What now?

Notify the CS team. We will either update the mapping or adjust preprocessing to align the new header with your saved import configuration.
