
Be sure to configure your RStudio Connect account before attempting to publish with the RStudio IDE. See Section 2 for information on configuring your Connect account. If you do not have at least the Publisher role within Connect, you can request permission to publish your own content via the link under “Info” on the “Content” page.


3.1 General Publishing Instructions

RStudio Connect accepts publishing Shiny applications, R Markdown documents, plots, graphs, websites, TensorFlow models, and APIs. The blue publishing icon in the RStudio IDE indicates built-in support for publishing this piece of content.


You can find the blue publishing icon at the following locations:

  • The upper right of the file editor
  • The document viewer when viewing a document
  • The embedded viewer when running a Shiny application
  • The plots pane

Click on this icon to open a publishing dialog where you can name your content and select additional files to include in the deployment. By default, RStudio will try to infer which data files and scripts are used in your content. This window lets you refine those file selections.


Most of the time, RStudio is able to determine automatically which files are needed to render your document on RStudio Connect. However, there are situations in which it will miss a file (for instance, if it isn’t referenced directly in your document). The Add More… button lets you add files to the bundle that will be sent to RStudio Connect so that they will be available on the server when your document is rendered. You can also use the resource_files field in your document’s YAML header to add additional files.
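For instance (the file names here are hypothetical), a resource_files entry in an R Markdown YAML header might look like:

```yaml
---
title: "My Report"
output: html_document
resource_files:
  - data/lookup.csv    # extra files the server needs at render time
  - images/logo.png
---
```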

Deployed data files must be in the same directory as your Shiny application or R Markdown document. Files stored elsewhere on your computer (C:\Users\me\mydata.csv) will not be available on the remote server.

Click Publish after verifying your settings.

Your first deployment may take a few minutes, as RStudio Connect attempts to recreate the R library you use locally – referenced packages are downloaded and installed. These packages are cached on the server; subsequent deployments will be faster.

Not all of your IDE environment can be replicated on the server. Different operating systems or versions of R can occasionally make content behave differently. Package installation failures may require the installation of additional system libraries on RStudio Connect.

When the deployment completes, you’ll be taken to the content’s settings page in RStudio Connect.


This page lets you verify the sharing and visibility of your content. Click Publish to confirm these settings.

Content cannot be viewed until it is published.


You should now see your deployed content – a rendered version of the document or a live instance of your Shiny application. The content is displayed within the context of RStudio Connect, and you are able to further configure settings for your content.

3.2 Collaboration

Some data products will have multiple authors and collaborators who are responsible for managing the content deployed to RStudio Connect. The first step to collaboration is sharing and working together on code. We recommend using a version control tool like Git to coordinate collaboration across many users. General information about getting started with Git is available elsewhere.

The second step is collaborating on the published data product. To enable multiple users to maintain and update a single piece of content on RStudio Connect, all users should be listed as collaborators on the content.


When the content is published to RStudio Connect for the first time, an rsconnect folder will be created in the directory that the content was published from.

This rsconnect folder should be added and committed into version control. It does not contain any private or secure information. However, it does contain the URL for the RStudio Connect server and the content URL. This information allows future publications to easily target the same endpoint.
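As a rough illustration (the field names and values below are illustrative, not exhaustive), a deployment record inside the rsconnect folder is a small plain-text (DCF) file along these lines:

```
name: my-report
server: connect.example.com
url: https://connect.example.com/content/24/
```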

A collaborator, then, would clone or check out the code to their development system and make whatever changes or improvements are necessary. When finished, they will click the Publish button in the RStudio IDE, which will use the rsconnect folder to determine where the content should be published. During the publishing process, RStudio Connect checks that the authenticated user has collaborator access for this piece of content.


If the publisher wants to publish to a new location, this option is surfaced in the RStudio IDE as well. This will create a second deployment location on RStudio Connect and will leave the original content deployment unmodified. If you want to surface a single URL for your users despite publishing to a new location, keep in mind that you can assign a vanity URL to the original deployment location, then later assign it to a different piece of content on the server.



Keep in mind that package environments may be different on each developer’s computer. The original author and a collaborator may be using different computers, operating systems, or R versions with different package versions installed. RStudio Connect will attempt to reproduce the environment of whoever is publishing the content. Keeping developer environments in sync is not a problem solved by RStudio Connect. Rather, the packrat package and RStudio Server Pro address this problem more directly.

3.3 Publishing APIs

3.3.1 Publishing Plumber APIs

Plumber APIs have the following known restrictions:

  • Push-button publishing from the IDE is not supported
  • Server-side latency is not tracked

To get started with publishing Plumber API endpoints, create a directory with a plumber.R file defining your endpoints. From the console, execute the following, replacing <project-dir> with your project’s directory:
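The command itself did not survive in this excerpt; a minimal sketch, assuming the rsconnect package is installed and your Connect account is configured (leave <project-dir> as your own path):

```r
# Deploy the directory containing plumber.R as an API on RStudio Connect.
rsconnect::deployAPI(api = "<project-dir>")
```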


Once live, the content view will show you the results of your @get / endpoint. If @get / is not defined, you will see a “Swagger UI” presentation of your API. Swagger is an API documentation format; Plumber can automatically generate a swagger.json definition for your API by inspecting your endpoints. The “Swagger UI” page offers an interactive portal into your API that allows you to test your endpoints directly from the browser.

To make a call to your new API yourself, you’re going to need the target URL. You can find it by looking at the bottom of the Access tab in your API’s settings, or by clicking the ... menu in the upper-right of the content view and selecting “Open Solo”.


The location should look like the following:
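Based on the example URL used in the next paragraph, a content location has this shape:

```
https://example.com/content/42/
```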

All calls can be made relative to this path. For example, if you want to make a request to /volcano, the location in the above example would be https://example.com/content/42/volcano, like so:
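The request itself was not included in this excerpt; as a small Python sketch, the endpoint URL can be resolved from the content’s base path (the Authorization header form in the comment is an assumption to verify against your server’s documentation):

```python
# Resolve a Plumber route against the content's base URL from the Access tab.
import urllib.parse

base = "https://example.com/content/42/"           # example base path from above
endpoint = urllib.parse.urljoin(base, "volcano")   # relative route -> full URL
print(endpoint)  # https://example.com/content/42/volcano

# An authenticated request could then attach an API key, e.g.
# urllib.request.Request(endpoint, headers={"Authorization": "Key MY-KEY"})
```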

If your API restricts access to a particular set of users, then RStudio Connect will require that incoming requests authenticate the user making the request. The best approach for authenticating a request targeting an API is to use API Keys.

Alternatively, if your server is configured to use proxy authentication, you should ask your IT Administrator about ways to make API calls through that proxy.

If you want to perform access control in your Plumber API itself, or if you want to allow anyone to access your API, open the application settings, then the “Access” tab, and set “Who can view this API” to “Anyone”.


Some configurations may prohibit the “Anyone” access type.

A simple way to control access might be like the following:
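The example itself was not included in this excerpt; a hypothetical Plumber sketch in the same spirit checks a shared secret in a request header (the header and key names here are ours):

```r
#* @filter checkAuth
function(req, res) {
  # Plumber exposes the X-API-Key request header as req$HTTP_X_API_KEY.
  if (is.null(req$HTTP_X_API_KEY) || req$HTTP_X_API_KEY != "my-secret") {
    res$status <- 401
    return(list(error = "Invalid or missing API key"))
  }
  plumber::forward()  # pass the request on to the matched route
}
```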

See the Plumber documentation for more information on @filter methods, and how they can be skipped in your route with @preempt.

Security is hard. The example above might be good enough for some purposes, but is unsuitable in cases where you need multiple keys you can invalidate arbitrarily. In those cases, API Keys would be preferable. Ask your IT Administrator for guidance if you need help choosing a suitable authentication scheme.

3.3.2 Publishing TensorFlow Model APIs

Note: TensorFlow Saved Models up to TensorFlow version 1.9.0 are supported. To find out what version of TensorFlow is installed, you can run the following in the R console: tensorflow::tf_version().

If your installed TensorFlow version is greater than 1.9.0, you can install TensorFlow 1.9.0 by running the following in the R console: tensorflow::install_tensorflow(version = '1.9.0').

TensorFlow Model APIs are easy to deploy to RStudio Connect. Export your model:
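The export call was not included in this excerpt; a sketch assuming a trained keras model (the tensorflow/keras R packages provide export_savedmodel):

```r
library(keras)
# `model` is your trained keras model; this writes a SavedModel directory.
export_savedmodel(model, "savedmodel")
```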

Then deploy it to RStudio Connect using the rsconnect package:
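A sketch, assuming the rsconnect package; the directory name is whichever directory you exported your SavedModel to:

```r
# Upload the exported SavedModel directory as a TensorFlow Model API.
rsconnect::deployTFModel("savedmodel")
```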

The home page on your new TensorFlow Model API will explain how it can be used. Much like Plumber, you can use RStudio Connect access controls and RStudio Connect API keys to secure your Model API, or to allow everyone to use it.

All TensorFlow Model API requests follow the same basic shape. For the following examples, assume that your TensorFlow Model API is running open to the public at https://localhost:3939/content/12. Assume also that your TensorFlow model accepts as input a 2-tensor (matrix) of floating-point values with dimensions ∞x2 – that is, any number of rows and two columns.

You could call your TensorFlow Model API like so:
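The example request was not included in this excerpt; as a sketch, the JSON payload for two rows of that ∞x2 input can be built like this (the exact predict URL depends on your deployment):

```python
# Build the JSON body for a TensorFlow Model API call: a list of instances,
# each one a row of the Nx2 input matrix.
import json

payload = {"instances": [[1.0, 2.0], [3.0, 4.0]]}
body = json.dumps(payload)
print(body)
# urllib.request (or curl) can then POST `body` to the API URL at
# https://localhost:3939/content/12 with Content-Type: application/json.
```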

Note that the TensorFlow Model API is strict about the number of dimensions passed in instances. The instances array does not count towards tensor dimensions:

  • 'instances': [[[2.4]]] is one instance of a 2-tensor (matrix) with dimensions 1x1
  • 'instances': [[[[2.3, 4.5],[5.6, 7.8]]]] is one instance of a 3-tensor with dimensions 1x2x2
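The dimension-counting rule above can be checked mechanically; this small sketch (the helper name is ours) reports the rank of a single instance by its nesting depth:

```python
def instance_rank(value):
    """Nesting depth of one instance: 0 for a scalar, 2 for a matrix, etc."""
    rank = 0
    while isinstance(value, list):
        rank += 1
        value = value[0]  # descend into the first element at each level
    return rank

print(instance_rank([[2.4]]))                     # 2 -> a 1x1 matrix (2-tensor)
print(instance_rank([[[2.3, 4.5], [5.6, 7.8]]]))  # 3 -> a 1x2x2 3-tensor
```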

Your TensorFlow Model API will return the predicted values as you configured it. For example, if your model was configured to respond with a 0-tensor (scalar) of floats, you might get the following:
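As an illustration only (the values are made up), a scalar-per-instance response has this shape:

```json
{"predictions": [0.4, 0.7]}
```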

The rstudio/tfdeploy repository contains some example scripts for building and exporting simple models, so you can try them before you upload some of your own.

3.4 Publishing Documents


When publishing documents to RStudio Connect, you may encounter other deployment options depending on your content. These options are discussed below.

3.4.1 Publishing Destination


RPubs is a service for easily sharing R Markdown documents. RPubs is not related to RStudio Connect, and you should always choose “RStudio Connect” if you wish to publish your content to RStudio Connect.

RPubs documents are (1) always public, (2) always self-contained, and (3) cannot contain any Shiny content. You will see the choice to publish to RPubs if your document is self-contained and does not require Shiny. Some organizations want to prohibit publishing to RPubs to reduce the chance that sensitive data will be accidentally made public; publishing to RPubs (and shinyapps.io) can be disabled if desired using an RStudio Server option.

3.4.2 Publish Source Code


You will see these options when publishing from the document viewer.

Publishing the document with source code means that your R Markdown file (.Rmd) will be deployed to RStudio Connect. This file is rendered (usually to HTML) on the server.

Publishing only the finished document means that the HTML file you rendered locally is deployed to RStudio Connect.

We recommend publishing your documents with source code, as it allows you to re-render the document with RStudio Connect (on a weekly schedule, for example). If the document cannot be rendered by RStudio Connect because of files or data sources that are unavailable on the server, choose “Publish finished document only” so others can view your work.

3.4.3 Document Selection


You will see these options when publishing an R Markdown document from a directory that contains more than one R Markdown document. It is possible to link together multiple R Markdown documents to make a multi-page document, so this is your chance to indicate that you’ve done this, and to publish all the documents at once. In most cases, however, you’ll want to publish just the current document.

For a while now, it has been possible to publish a .dacpac file (meaning apply it to a new or existing database) using the cross-platform version of sqlpackage.

But authoring and building a database project (.sqlproj) was only possible on Windows, as the .sqlproj project type is based on the classic .NET Framework .csproj project type.

Thanks to the new Database Project extension in the Azure Data Studio Insiders build, it is now possible to author, build, and manually publish a SQL Server Database project.

And by using the new MsBuild.Sdk.SqlProj SDK and project type, it is also possible to build and publish a Database Project from a build agent (CI pipeline), without having to install the sqlpackage tool. Read on!

What is an Azure SQL Database project?

A database project is a Visual Studio project type that allows you to develop, build, test, and publish your database from a source-controlled project, just like you develop your application code. You can start from scratch with a new Database project, or import an existing database.

The database project describes the 'desired state' of your database schema, and the output from the project is a .dacpac file (a structured .zip file), which various graphical and command-line tools can compare against or apply ('publish') to your production databases.
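For example (the object and file names here are hypothetical), a project item is simply a declarative CREATE script describing the desired state of one object, not a migration script:

```sql
-- Tables/Customer.sql: the table as it should exist in the target database.
CREATE TABLE dbo.Customer
(
    CustomerId INT NOT NULL PRIMARY KEY,
    Name       NVARCHAR(200) NOT NULL
);
```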

The underlying DacFx API is available as a .NET Standard 2.0 library, and a command line tool, sqlpackage.

What is Azure Data Studio?

Azure Data Studio is a free, cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, MacOS, and Linux.

Azure Data Studio offers a modern editor experience with IntelliSense, code snippets, source control integration, and an integrated terminal.

It is based on the VS Code editing experience, and is available as an open source project on GitHub.

Getting started with Database Projects in Azure Data Studio Insiders build

Currently, you need the Insiders build in order to try out the preview extension.

In Azure Data Studio Insiders, go to View, Extensions, and search for the 'SQL Database Projects' extension, then install it.

Then from Explorer, select Projects.

You then get three options:

  • New project

Use this to start a blank project; you can then add scripts to create your database objects (tables, indexes, stored procedures, views, etc.)

  • Open project

This will allow you to open an existing .sqlproj file, even if it was originally created on Windows in Visual Studio. Once open, a few changes will be added to your .sqlproj file, but you can continue to open and work with it in Visual Studio.

  • Import Project from Database

This will create a new project, and allow you to reverse engineer database objects (tables, indexes, stored procedures, views etc.) from an existing database.

Once the database project is ready to be deployed, you can Build and Publish it via the context menu.

Build means create a .dacpac file from the scripts and settings in your project. A .dacpac file is a .zip file that conforms to a standard format.

Publish means take the resulting .dacpac file, and apply it against a new or existing Azure SQL (or SQL Server) database, to change the schema of the target database to match the desired schema described in the .dacpac file.

It is also possible to build the project from the command line using the .NET Core cross-platform SDK, but there are some rough edges currently.

There is no command line publish support, so to publish from the command line, sqlpackage must be installed and run against the built .dacpac file. (See below for a solution to that.)
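Such a sqlpackage invocation might look like the following (the file and database names are hypothetical):

```shell
sqlpackage /Action:Publish \
  /SourceFile:bin/Debug/MyDatabase.dacpac \
  /TargetServerName:localhost \
  /TargetDatabaseName:MyDatabase
```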

Give the extension a try, and provide feedback on the GitHub repo.

Running build and publish of a Database project on a cross-platform build server


With help from a community project, the MsBuild.Sdk.SqlProj SDK package, it is possible to create a companion project that allows you to run dotnet publish from the command line on your build server, without requiring sqlpackage to be installed.
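The companion project file is tiny; here is a sketch based on the MsBuild.Sdk.SqlProj README (the SDK version, target framework, and include glob are assumptions to adjust for your setup):

```xml
<Project Sdk="MsBuild.Sdk.SqlProj/1.0.0">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Reuse the scripts from the existing database project -->
    <Content Include="../MyDatabase/**/*.sql" />
  </ItemGroup>
</Project>
```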

You can simply run the following command to build and publish the project against a live database from your Linux or MacOS based build server:
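With hypothetical server and database names (the /p: properties come from the MsBuild.Sdk.SqlProj documentation), that command might look like:

```shell
# TargetUser / TargetPassword properties are also available for SQL auth.
dotnet publish /p:TargetServerName=localhost /p:TargetDatabaseName=MyDatabase
```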


Read more details about using this package in my blog post here.