1. Can we connect from the Jupyter notebook to Hive, SparkSQL, or Presto?
EMR release 5.14.0 is the first to include JupyterHub. You can see all applications available within EMR release 5.14.0 listed here.
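As a sketch of what a connection from a notebook's Python kernel could look like, the snippet below uses the PyHive client. The hostname is a placeholder, and the ports are the usual EMR defaults (HiveServer2 on 10000, Spark Thrift Server on 10001, Presto on 8889); verify them against your own cluster's configuration.

```python
# Default ports for SQL engines on an EMR master node (assumed defaults;
# check your cluster's security groups and service configuration).
DEFAULT_PORTS = {"hive": 10000, "sparksql": 10001, "presto": 8889}

def endpoint_for(engine, master_dns):
    """Return (host, port) for a SQL engine on the EMR master node."""
    if engine not in DEFAULT_PORTS:
        raise ValueError("unknown engine: %s" % engine)
    return master_dns, DEFAULT_PORTS[engine]

def run_hive_query(master_dns, sql, user="hadoop"):
    """Connect over Thrift and run a query. Needs the 'pyhive' package and
    network access to the cluster, so it is not executed in this sketch."""
    from pyhive import hive  # imported here so the sketch loads without pyhive
    host, port = endpoint_for("hive", master_dns)
    cursor = hive.connect(host=host, port=port, username=user).cursor()
    cursor.execute(sql)
    return cursor.fetchall()

# Example endpoint lookup (hostname is a placeholder):
host, port = endpoint_for("presto", "ec2-xx-xx-xx-xx.compute.amazonaws.com")
```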
2. Are there any interpreters for Scala and PySpark?
When you create a cluster with JupyterHub on EMR, the default Python 3 kernel for Jupyter and the PySpark, SparkR, and Spark kernels for Sparkmagic are installed in the Docker container. You can use these kernels to run ad-hoc Spark code and interactive SQL queries using Python, R, and Scala. You can also manually install additional kernels, libraries, and packages within the Docker container and then import them in the appropriate shell.
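To give a concrete flavour of a manual kernel installation, the helper below builds a minimal Jupyter kernelspec ("kernel.json") of the kind you would place under the container's kernels directory. The interpreter path is a placeholder for whatever runtime you install inside the jupyterhub container.

```python
import json

def make_kernelspec(display_name, interpreter, language):
    """Build a minimal kernel.json for an IPython-style kernel.
    The interpreter path is a placeholder; adjust it to the runtime
    you actually install inside the container."""
    return {
        "argv": [interpreter, "-m", "ipykernel_launcher",
                 "-f", "{connection_file}"],
        "display_name": display_name,
        "language": language,
    }

spec = make_kernelspec("My Python 3", "/usr/bin/python3", "python")
print(json.dumps(spec, indent=2))
```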
3. Is there any option to connect from a Jupyter notebook via a JDBC / secured JDBC connection?
The latest JDBC drivers can be found here. You will also find an example here that uses SQL Workbench/J as a SQL client to connect to a Hive cluster on EMR.
You can download and install the necessary drivers from the links available here. You can add JDBC connectors at cluster launch using configuration classifications; an example of Presto classifications and an example of configuring a cluster with the PostgreSQL JDBC driver can be seen here.
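As an illustration of the classification approach, the snippet below builds the JSON for the Presto PostgreSQL connector classification described in the EMR documentation. The JDBC URL and credentials are placeholders.

```python
import json

def presto_postgres_classification(jdbc_url, user, password):
    """Build the EMR configuration classification for Presto's
    PostgreSQL connector. All values here are placeholders."""
    return {
        "Classification": "presto-connector-postgresql",
        "Properties": {
            "connection-url": jdbc_url,
            "connection-user": user,
            "connection-password": password,
        },
    }

cfg = presto_postgres_classification(
    "jdbc:postgresql://example.db.amazonaws.com:5432/mydb",
    "admin", "secret")
# Pass [cfg] as the Configurations parameter at cluster launch.
print(json.dumps([cfg], indent=2))
```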
4. What would be the steps to bootstrap a cluster with Jupyter notebooks?
As the dedicated AWS blog post states, AWS provides a bootstrap action to install Jupyter; the post walks through the steps in detail.
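A hedged sketch of what launching such a cluster with boto3 could look like. The S3 script path, instance types, and roles are placeholders; sending the request requires boto3 and valid AWS credentials, so only the request dict is built and checked here.

```python
def jupyter_cluster_request(script_s3_path):
    """Build a run_job_flow request with a bootstrap action that runs an
    install script from S3. All names and sizes are placeholders."""
    return {
        "Name": "jupyter-cluster",
        "ReleaseLabel": "emr-5.14.0",
        "Applications": [{"Name": "Spark"}],
        "Instances": {
            "InstanceCount": 3,
            "MasterInstanceType": "m4.xlarge",
            "SlaveInstanceType": "m4.xlarge",
        },
        "BootstrapActions": [{
            "Name": "install-jupyter",
            "ScriptBootstrapAction": {"Path": script_s3_path},
        }],
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

request = jupyter_cluster_request("s3://my-bucket/install-jupyter.sh")
# To actually launch: boto3.client("emr").run_job_flow(**request)
```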
5. Any way to save the Jupyter notebook to persistent storage such as S3 automatically, as in Zeppelin?
This is not available by default; however, you may be able to create your own script to achieve it.
EMR enables you to run a script at any time during step processing in your cluster. You specify a step that runs a script either when you create your cluster, or you can add a step while your cluster is in the WAITING state.
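One possible shape for such a script is sketched below: it mirrors notebook files from the master node to S3. The local root /var/lib/jupyter/home is an assumption about where EMR's JupyterHub keeps user home directories (verify this on your release), and the bucket and prefix are placeholders. Only the path-to-key mapping is pure enough to run anywhere; the upload itself needs boto3 and credentials.

```python
import os

def s3_key_for(local_path, root="/var/lib/jupyter/home",
               prefix="notebook-backups"):
    """Map a local notebook path to an S3 key under a backup prefix."""
    rel = os.path.relpath(local_path, root)
    return "%s/%s" % (prefix, rel)

def backup_to_s3(bucket, root="/var/lib/jupyter/home"):
    """Walk the notebook tree and upload each file to S3.
    Requires boto3 and AWS credentials; not executed in this sketch."""
    import boto3
    s3 = boto3.client("s3")
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            s3.upload_file(path, bucket, s3_key_for(path, root))
```

A cron entry on the master node could then call this periodically, per the backup recommendation further below.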
6. Is there a way to add HTTPS to the Jupyter notebook GUI? If so, how?
By default, JupyterHub on EMR uses a self-signed certificate for SSL encryption using HTTPS. Users are prompted to trust the self-signed certificate when they connect.
You can use a trusted certificate and keys of your own. Replace the default certificate file, server.crt, and key file, server.key, in the /etc/jupyter/conf/ directory on the master node with certificate and key files of your own. Use the c.JupyterHub.ssl_key and c.JupyterHub.ssl_cert properties in the jupyterhub_config.py file to specify your SSL materials.
You can read more about this in the Security Settings section of the JupyterHub documentation.
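For reference, the relevant lines of jupyterhub_config.py could look like the following, assuming your trusted certificate and key have been copied over the default files mentioned above:

```python
# jupyterhub_config.py (configuration fragment, not standalone code)
# Paths are the default locations described above; point them at your
# own trusted certificate and key if you store them elsewhere.
c.JupyterHub.ssl_cert = '/etc/jupyter/conf/server.crt'
c.JupyterHub.ssl_key = '/etc/jupyter/conf/server.key'
```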
7. Is there a way to work with the API and command line for Jupyter?
As is the case with all AWS services, you can create an EMR cluster with JupyterHub using the AWS Management Console, the AWS Command Line Interface, or the EMR API.
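A sketch of the API route via boto3, for comparison with the console flow: only the request dict is built and checked here, since run_job_flow needs AWS credentials. The cluster name, instance types, and roles are placeholders.

```python
def jupyterhub_cluster_request():
    """Build a run_job_flow request that installs the JupyterHub
    application. Names, sizes, and roles are placeholders."""
    return {
        "Name": "jupyterhub-cluster",
        "ReleaseLabel": "emr-5.14.0",
        "Applications": [{"Name": "JupyterHub"}],
        "Instances": {
            "InstanceCount": 3,
            "MasterInstanceType": "m4.xlarge",
            "SlaveInstanceType": "m4.xlarge",
            "KeepJobFlowAliveWhenNoSteps": True,
        },
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
    }

req = jupyterhub_cluster_request()
# boto3.client("emr").run_job_flow(**req)  # actual launch
```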
8. Where is the config path of the Jupyter notebook?
You can customize the configuration of JupyterHub on EMR and of individual user notebooks by connecting to the cluster master node and editing the configuration files.
As mentioned above, AWS provides a bootstrap action to install Jupyter; see the blog post linked below for details.
9. Any common issues with Jupyter?
There are a number of considerations to bear in mind:
User notebooks and files are saved to the file system on the master node. This is ephemeral storage that does not persist through cluster termination. When a cluster terminates, this data is lost if not backed up. We recommend that you schedule regular backups using cron jobs or another means suitable for your application.
In addition, configuration changes made within the container may not persist if the container restarts. We recommend that you script or otherwise automate container configuration so that you can reproduce customizations more readily.
10. Orchestration options for Jupyter notebooks? i.e. how to schedule a notebook to run daily
JupyterHub and related components run inside a Docker container named jupyterhub that runs the Ubuntu operating system. There are several ways for you to administer components running inside the container.
Please note that customizations you perform within the container may not persist if the container restarts. We recommend that you script or otherwise automate container configuration so that you can reproduce customizations more readily.
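One approach you could script yourself, assuming the jupyter/nbconvert tooling is available where the job runs: build a `jupyter nbconvert --execute` command and invoke it from cron or an EMR step. The notebook paths below are placeholders, so only the command construction is exercised here.

```python
def nbconvert_command(notebook_path, output_path):
    """Build the command line that executes a notebook headlessly and
    writes the executed copy to output_path. Paths are placeholders."""
    return [
        "jupyter", "nbconvert",
        "--to", "notebook",
        "--execute", notebook_path,
        "--output", output_path,
    ]

cmd = nbconvert_command("/home/user/daily_report.ipynb",
                        "daily_report_out.ipynb")
# To run it:  subprocess.run(cmd, check=True)
# A cron entry for a daily 06:00 run might look like:
# 0 6 * * * jupyter nbconvert --to notebook --execute /home/user/daily_report.ipynb
```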
11. User / group / credentials management in Jupyter notebook?
You can use one of two methods for users to authenticate to JupyterHub so that they can create notebooks and, optionally, administer JupyterHub.
The easiest method is to use JupyterHub's pluggable authentication module (PAM). However, JupyterHub on EMR also supports the LDAP Authenticator Plugin for JupyterHub for obtaining user identities from an LDAP server, such as a Microsoft Active Directory server.
You can find instructions and examples for adding users with PAM here and with LDAP here.
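To show the shape of the LDAP option, a configuration fragment along the lines of the LDAP Authenticator Plugin's documented settings; the server address and bind template are placeholders for your own directory:

```python
# jupyterhub_config.py (configuration fragment, not standalone code)
# Switch from the default PAM authenticator to LDAP. The server address
# and bind template below are placeholders for your directory layout.
c.JupyterHub.authenticator_class = 'ldapauthenticator.LDAPAuthenticator'
c.LDAPAuthenticator.server_address = 'ldap://ldap.example.com'
c.LDAPAuthenticator.bind_dn_template = 'cn={username},ou=people,dc=example,dc=com'
```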
12. Notebook collaboration features?
13. Import/export options?
As stated above, you can manually install additional kernels, libraries, and packages within the Docker container and then import them in the appropriate shell.
14. Any other connections built into Jupyter?
As stated above, EMR release 5.14.0 is the first to include JupyterHub, and it can run alongside all the other applications available within EMR release 5.14.0.
15. Working seamlessly with AWS Glue in terms of sharing a metastore?
If you are asking, for example, about configuring Hive to use the Glue Data Catalog as its metastore, you can indeed do this with EMR version 5.8.0 or later.
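For reference, a sketch of the hive-site configuration classification the EMR documentation gives for pointing Hive at the Glue Data Catalog; it is built and printed here rather than sent to any API.

```python
import json

# Classification that tells Hive on EMR (5.8.0+) to use the Glue Data
# Catalog as its metastore, per the EMR documentation linked below.
GLUE_METASTORE_CLASSIFICATION = {
    "Classification": "hive-site",
    "Properties": {
        "hive.metastore.client.factory.class":
            "com.amazonaws.glue.catalog.metastore."
            "AWSGlueDataCatalogHiveClientFactory"
    },
}

# Pass this list as the Configurations parameter at cluster launch.
print(json.dumps([GLUE_METASTORE_CLASSIFICATION], indent=2))
```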
Finally, I have included the following for your reference:
1. JupyterHub Components
The following diagram depicts the components of JupyterHub on EMR with the corresponding authentication methods for notebook users and the administrator.
As you are more than likely aware, AWS recently launched a machine learning notebook service called SageMaker, which uses Jupyter notebooks exclusively. Because SageMaker is integrated with other AWS services, you can achieve greater control; for example, you can use the IAM service to control user access. You can also connect to it from an EMR cluster: EMR version 5.11.0 added the aws-sagemaker-spark-sdk component to Spark, which installs Amazon SageMaker Spark and associated dependencies for Spark integration with Amazon SageMaker.
You can use Amazon SageMaker Spark to construct Spark machine learning (ML) pipelines using Amazon SageMaker stages. If this is of interest to you, you can read more about it here and in the SageMaker Spark README on GitHub.
 Amazon EMR 5.x Release Versions – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-release-5x.html
 Installing Additional Kernels and Libraries – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-install-kernels-libs.html
 Use the Hive JDBC Driver – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/HiveJDBCDriver.html
 Use Business Intelligence Tools with Amazon EMR – https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-bi-tools.html
 Adding Database Connectors – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/presto-adding-db-connectors.html
 Run Jupyter Notebook and JupyterHub on Amazon EMR – https://aws.amazon.com/blogs/big-data/running-jupyter-notebook-and-jupyterhub-on-amazon-emr/
 Create Bootstrap Actions to Install Additional Software – https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-bootstrap.html
 Run a Script in a Cluster – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hadoop-script.html
 Connecting to the Master Node and Notebook Servers – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-connect.html
 JupyterHub Security Settings – http://jupyterhub.readthedocs.io/en/latest/getting-started/security-basics.html
 Create a Cluster With JupyterHub – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-launch.html
 Configuring JupyterHub – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-configure.html
 Considerations When Using JupyterHub on Amazon EMR – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-considerations.html
 JupyterHub Configuration and Administration – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-administer.html
 Adding Jupyter Notebook Users and Administrators – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-user-access.html
 Using PAM Authentication – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-pam-users.html
 Using LDAP Authentication – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub-ldap-users.html
 Using the AWS Glue Data Catalog as the Metastore for Hive – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hive-metastore-glue.html
 JupyterHub – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-jupyterhub.html
 EMR Release 5.11.0 – https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-whatsnew-history.html#emr-5110-whatsnew
 Using Apache Spark with Amazon SageMaker – https://docs.aws.amazon.com/sagemaker/latest/dg/apache-spark.html
 SageMaker Spark – https://github.com/aws/sagemaker-spark/blob/master/README.md
 What Is Amazon SageMaker? – https://docs.aws.amazon.com/sagemaker/latest/dg/whatis.html
Need to learn more about AWS Big Data (Demystified)?
- Contact me via LinkedIn: Omid Vahdaty
- Website: https://amazon-aws-big-data-demystified.ninja/
- Join our meetup: https://www.meetup.com/AWS-Big-Data-Demystified/
- Join our Facebook group: https://www.facebook.com/groups/amazon.aws.big.data.demystified/
- Subscribe to our YouTube channel: https://www.youtube.com/channel/UCzeGqhZIWU-hIDczWa8GtgQ?view_as=subscriber