
I'm playing with the new Data API for Amazon Aurora Serverless.

Is it possible to get the table column names in the response?

If, for example, I run the following query against a user table with the columns id, first_name, last_name, email, and phone:

const sqlStatement = `
    SELECT *
    FROM user
    WHERE id = :id 
`;
const params = {
    secretArn: <mySecretArn>,
    resourceArn: <myResourceArn>,
    database: <myDatabase>,
    sql: sqlStatement,
    parameters: [
        {
            name: "id",
            value: {
                // id is numeric, so longValue is the correct field here
                // (stringValue must be a string, e.g. "1")
                "longValue": 1
            }
        }
    ]
};
// In AWS SDK v2, .promise() is needed to get an awaitable
let res = await this.RDS.executeStatement(params).promise();
console.log(res);

I'm getting a response like the one below, so I need to guess which column corresponds to each value:

{
    "numberOfRecordsUpdated": 0,
    "records": [
        [
            {
                "longValue": 1
            },
            {
                "stringValue": "Nicolas"
            },
            {
                "stringValue": "Perez"
            },
            {
                "stringValue": "example@example.com"
            },
            {
                "isNull": true
            }
        ]
    ]
}

I would like to have a response like this one:

{
    id: 1,
    first_name: "Nicolas",
    last_name: "Perez",
    email: "example@example.com",
    phone: null
}

Update 1

I have found an npm module that wraps the Aurora Serverless Data API and simplifies development.

jtoberon
niqui

4 Answers


We decided to take the current approach because we were trying to cut down on the response size and including column information with each record was redundant.

You can explicitly choose to include column metadata in the result. See the parameter: "includeResultMetadata".

https://docs.aws.amazon.com/rdsdataservice/latest/APIReference/API_ExecuteStatement.html#API_ExecuteStatement_RequestSyntax
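As a sketch of what that looks like: with includeResultMetadata: true the response gains a columnMetadata array whose order matches the values inside each record, so names and values can be zipped by index. (The sample below is hand-built from the question's user table; the real metadata objects carry more fields, such as type and nullable.)

```javascript
// Hand-built sample: roughly what the question's query returns with
// includeResultMetadata: true (metadata trimmed to the name field)
const response = {
    columnMetadata: [
        { name: "id" }, { name: "first_name" }, { name: "last_name" },
        { name: "email" }, { name: "phone" }
    ],
    records: [[
        { longValue: 1 },
        { stringValue: "Nicolas" },
        { stringValue: "Perez" },
        { stringValue: "example@example.com" },
        { isNull: true }
    ]]
};

// Zip column names and values by index; map { isNull: true } to null
const rows = response.records.map(record =>
    Object.fromEntries(record.map((value, i) => [
        response.columnMetadata[i].name,
        value.isNull ? null : Object.values(value)[0]
    ]))
);

console.log(rows[0]);
```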

AravindR
  • This doesn't make much sense. How likely is a consumer of the API *not* going to need a column name for returned data? While there are use cases which won't require a column name -- I think you're optimizing for an unlikely scenario. – Shaun Aug 27 '19 at 15:53
  • I'll further add: to get a column name, includeResultMetadata must be enabled. This blows up the return size of each request 20x -- when in reality, inclusion of the column name alone would likely be sufficient for the majority of scenarios. – Shaun Aug 27 '19 at 15:55
  • If we "includeResultMetadata" and loop through each record's values, will the index of each value always match the index of columnMetadata? I'm struggling through this same problem with the Data API and Golang, and was surprised there wasn't a simpler way to bind key->value pairs to structs. It takes a lot more work. – kue Aug 29 '19 at 15:12
  • I have to agree with the previous commenters that not including the column names is extremely inconvenient. How are we supposed to map the response? Are we to just assume the indices of returned values will always be in the same order? Where is the documentation for this? Also, I can't see much benefit in excluding the names, as you are still including the column type for every value returned, e.g. 'stringValue'. It would have made more sense to include column names in the response and column types in the ResultMetadata. Can we at least get a config flag for names? – Seanimus Nov 21 '19 at 01:07
  • The entire rdsdataapi looks like it was put together in a 24-hour hackathon or something. Amazon wants to sell it as THE standard for serverless MySQL, but its API makes it look like it can't even scale beyond testing, let alone be used in a production application. Everything acts as a roadblock; very disappointed with the lack of support for Aurora. – Munib Dec 03 '19 at 22:46
  • There are more options than just "include no column metadata" and "include column metadata with every record". How about including one array of column names in the response, and guaranteeing that the order of each record matches the order of the column names? – MDe May 19 '20 at 17:35
  • Has this been solved yet? I am using the RDS Data API and it is very inconvenient to parse the metadata just to get the column names and the matching values... – MoneyBall May 23 '21 at 14:17

I agree with the consensus here that there should be an out-of-the-box way to do this in the Data Service API. Because there is not, here's a JavaScript function that will parse the response:

const parseDataServiceResponse = res => {
    let columns = res.columnMetadata.map(c => c.name);
    let data = res.records.map(record => {
        let obj = {};
        record.forEach((v, i) => {
            obj[columns[i]] = Object.values(v)[0];
        });
        return obj;
    });
    return data;
};
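For example, calling it on a response shaped like the one in the question, with includeResultMetadata: true set so columnMetadata is present. (The parser is repeated here so the snippet runs on its own, and the sample is trimmed to three columns and just the name metadata field for brevity.)

```javascript
// Parser from the answer above, repeated so this snippet runs standalone
const parseDataServiceResponse = res => {
    let columns = res.columnMetadata.map(c => c.name);
    return res.records.map(record => {
        let obj = {};
        record.forEach((v, i) => { obj[columns[i]] = Object.values(v)[0]; });
        return obj;
    });
};

// Sample response trimmed to the fields the parser uses
const res = {
    columnMetadata: [{ name: "id" }, { name: "first_name" }, { name: "email" }],
    records: [[
        { longValue: 1 },
        { stringValue: "Nicolas" },
        { stringValue: "example@example.com" }
    ]]
};

console.log(parseDataServiceResponse(res));
// [ { id: 1, first_name: 'Nicolas', email: 'example@example.com' } ]
```

Note that { "isNull": true } values come through as true rather than null with this approach; pflangan's answer below handles that case for character columns.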
C. Slack

I understand the pain, but the design seems reasonable given that a SELECT statement can join multiple tables, so duplicated column names may exist.

Similar to the answer above from @C. Slack, but I used a combination of map and reduce to parse the response from Aurora Postgres.

// declarative column names in array
const columns = ['a.id', 'u.id', 'u.username', 'g.id', 'g.name'];

// execute sql statement
const params = {
  database: AWS_PROVIDER_STAGE,
  resourceArn: AWS_DATABASE_CLUSTER,
  secretArn: AWS_SECRET_STORE_ARN,
  // includeResultMetadata: true,
  sql: `
    SELECT ${columns.join()} FROM accounts a 
    FULL OUTER JOIN users u ON u.id = a.user_id
    FULL OUTER JOIN groups g ON g.id = a.group_id
    WHERE u.username=:username;
  `,
  parameters: [
    {
      name: 'username',
      value: {
        stringValue: 'rick.cha',
      },
    },
  ],
};
const rds = new AWS.RDSDataService();
const response = await rds.executeStatement(params).promise();

// parse response into json array
const data = response.records.map((record) => {
  return record.reduce((prev, val, index) => {
    return { ...prev, [columns[index]]: Object.values(val)[0] };
  }, {});
});

Hope this code snippet helps someone.

And here is the response:

[
  {
    'a.id': '8bfc547c-3c42-4203-aa2a-d0ee35996e60',
    'u.id': '01129aaf-736a-4e86-93a9-0ab3e08b3d11',
    'u.username': 'rick.cha',
    'g.id': 'ff6ebd78-a1cf-452c-91e0-ed5d0aaaa624',
    'g.name': 'valentree',
  },
  {
    'a.id': '983f2919-1b52-4544-9f58-c3de61925647',
    'u.id': '01129aaf-736a-4e86-93a9-0ab3e08b3d11',
    'u.username': 'rick.cha',
    'g.id': '2f1858b4-1468-447f-ba94-330de76de5d1',
    'g.name': 'ensightful',
  },
]
R.Cha

I've built on the great answer already provided by C. Slack to deal with how AWS handles empty nullable character fields, which come back as { "isNull": true } in the JSON.

Here's my function, which handles this by returning an empty string value - that's what I would expect anyway.

const parseRDSdata = (input) => {
    let columns = input.columnMetadata.map(c => ({ name: c.name, typeName: c.typeName }));

    let parsedData = input.records.map(row => {
        let response = {};

        row.forEach((v, i) => {
            // Check the typeName in the column metadata, and also the key name in the value -
            // we need to cater for a return value of { "isNull": true } - pflangan
            if ((columns[i].typeName == 'VARCHAR' || columns[i].typeName == 'CHAR') && Object.keys(v)[0] == 'isNull' && Object.values(v)[0] == true)
                response[columns[i].name] = '';
            else
                response[columns[i].name] = Object.values(v)[0];
        });
        return response;
    });

    return parsedData;
};
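A runnable sketch of the function above against a minimal hand-built input (the typeName values are assumed for a MySQL schema, and the function is repeated so the snippet stands alone):

```javascript
// Function from the answer above, repeated so this snippet runs on its own
const parseRDSdata = (input) => {
    let columns = input.columnMetadata.map(c => ({ name: c.name, typeName: c.typeName }));
    return input.records.map(row => {
        let response = {};
        row.forEach((v, i) => {
            // Nullable CHAR/VARCHAR columns come back as { "isNull": true }
            if ((columns[i].typeName == 'VARCHAR' || columns[i].typeName == 'CHAR') && Object.keys(v)[0] == 'isNull' && Object.values(v)[0] == true)
                response[columns[i].name] = '';
            else
                response[columns[i].name] = Object.values(v)[0];
        });
        return response;
    });
};

// Hand-built input: phone is a nullable VARCHAR returned as { isNull: true }
const input = {
    columnMetadata: [
        { name: "id", typeName: "BIGINT" },
        { name: "phone", typeName: "VARCHAR" }
    ],
    records: [[{ longValue: 1 }, { isNull: true }]]
};

console.log(parseRDSdata(input)); // [ { id: 1, phone: '' } ]
```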

pflangan