2017-12-26T19:34:07Z

The Flask Mega-Tutorial Part IV: Database

This is the fourth installment of the Flask Mega-Tutorial series, in which I'm going to tell you how to work with databases.


The topic of this chapter is extremely important. For most applications, there is going to be a need to maintain persistent data that can be retrieved efficiently, and this is exactly what databases are made for.

The GitHub links for this chapter are: Browse, Zip, Diff.

Databases in Flask

As I'm sure you have heard already, Flask does not support databases natively. This is one of the many areas in which Flask is intentionally not opinionated, which is great, because you have the freedom to choose the database that best fits your application instead of being forced to adapt to one.

There are great choices for databases in Python, many of them with Flask extensions that provide better integration with the application. The databases can be separated into two big groups: those that follow the relational model, and those that do not. The latter group is often referred to as NoSQL, indicating that these databases do not follow the relational model or its query language, SQL. While there are great database products in both groups, my opinion is that relational databases are a better match for applications that have structured data such as lists of users, blog posts, etc., while NoSQL databases tend to be better for data that has a less defined structure. This application, like most others, can be implemented using either type of database, but for the reasons stated above, I'm going to go with a relational database.

In Chapter 3 I showed you a first Flask extension. In this chapter I'm going to use two more. The first is Flask-SQLAlchemy, an extension that provides a Flask-friendly wrapper to the popular SQLAlchemy package, which is an Object Relational Mapper or ORM. ORMs allow applications to manage a database using high-level entities such as classes, objects and methods instead of tables and SQL. The job of the ORM is to translate the high-level operations into database commands.

The nice thing about SQLAlchemy is that it is an ORM not for one, but for many relational databases. SQLAlchemy supports a long list of database engines, including the popular MySQL, PostgreSQL and SQLite. This is extremely powerful, because you can do your development using a simple SQLite database that does not require a server, and then when the time comes to deploy the application on a production server you can choose a more robust MySQL or PostgreSQL server, without having to change your application.

To install Flask-SQLAlchemy in your virtual environment, make sure you have activated it first, and then run:

(venv) $ pip install flask-sqlalchemy

Database Migrations

Most database tutorials I've seen cover creation and use of a database, but do not adequately address the problem of making updates to an existing database as the application needs change or grow. This is hard because relational databases are centered around structured data, so when the structure changes the data that is already in the database needs to be migrated to the modified structure.

The second extension that I'm going to present in this chapter is Flask-Migrate, which is actually one created by yours truly. This extension is a Flask wrapper for Alembic, a database migration framework for SQLAlchemy. Working with database migrations adds a bit of work to get a database started, but that is a small price to pay for a robust way to make changes to your database in the future.

The installation process for Flask-Migrate is similar to other extensions you have seen:

(venv) $ pip install flask-migrate

Flask-SQLAlchemy Configuration

During development, I'm going to use a SQLite database. SQLite databases are the most convenient choice for developing small applications, sometimes even not so small ones, as each database is stored in a single file on disk and there is no need to run a database server like MySQL or PostgreSQL.

We have two new configuration items to add to the config file:

config.py: Flask-SQLAlchemy configuration

import os
basedir = os.path.abspath(os.path.dirname(__file__))

class Config(object):
    # ...
    SQLALCHEMY_DATABASE_URI = os.environ.get('DATABASE_URL') or \
        'sqlite:///' + os.path.join(basedir, 'app.db')
    SQLALCHEMY_TRACK_MODIFICATIONS = False

The Flask-SQLAlchemy extension takes the location of the application's database from the SQLALCHEMY_DATABASE_URI configuration variable. As you recall from Chapter 3, it is in general a good practice to set configuration from environment variables, and provide a fallback value when the environment does not define the variable. In this case I'm taking the database URL from the DATABASE_URL environment variable, and if that isn't defined, I'm configuring a database named app.db located in the main directory of the application, which is stored in the basedir variable.

The SQLALCHEMY_TRACK_MODIFICATIONS configuration option is set to False to disable a feature of Flask-SQLAlchemy that I do not need, which is to send a signal to the application every time a change is about to be made in the database.

The database is going to be represented in the application by the database instance. The database migration engine will also have an instance. These are objects that need to be created after the application, in the app/__init__.py file:

app/__init__.py: Flask-SQLAlchemy and Flask-Migrate initialization

from flask import Flask
from config import Config
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate

app = Flask(__name__)
app.config.from_object(Config)
db = SQLAlchemy(app)
migrate = Migrate(app, db)

from app import routes, models

I have made three changes to the init script. First, I have added a db object that represents the database. Then I have added another object that represents the migration engine. Hopefully you see a pattern in how to work with Flask extensions: most extensions are initialized in the same way as these two. Finally, I'm importing a new module called models at the bottom. This module will define the structure of the database.

Database Models

The data that will be stored in the database will be represented by a collection of classes, usually called database models. The ORM layer within SQLAlchemy will do the translations required to map objects created from these classes into rows in the proper database tables.

Let's start by creating a model that represents users. Using the WWW SQL Designer tool, I have made the following diagram to represent the data that we want to use in the users table:

users table

The id field is usually present in all models, and is used as the primary key. Each user in the database will be assigned a unique id value, stored in this field. Primary keys are, in most cases, automatically assigned by the database, so I just need to provide the id field marked as a primary key.

The username, email and password_hash fields are defined as strings (or VARCHAR in database jargon), and their maximum lengths are specified so that the database can optimize space usage. While the username and email fields are self-explanatory, the password_hash field deserves some attention. I want to make sure the application that I'm building adopts security best practices, and for that reason I will not be storing user passwords in the database. The problem with storing passwords is that if the database ever becomes compromised, the attackers will have access to the passwords, and that could be devastating for users. Instead of writing the passwords directly, I'm going to write password hashes, which greatly improve security. This is going to be the topic of another chapter, so don't worry about it too much for now.

So now that I know what I want for my users table, I can translate that into code in the new app/models.py module:

app/models.py: User database model

from app import db

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), index=True, unique=True)
    password_hash = db.Column(db.String(128))

    def __repr__(self):
        return '<User {}>'.format(self.username)

The User class created above inherits from db.Model, a base class for all models from Flask-SQLAlchemy. This class defines several fields as class variables. Fields are created as instances of the db.Column class, which takes the field type as an argument, plus other optional arguments that, for example, allow me to indicate which fields are unique and indexed, which is important so that database searches are efficient.

The __repr__ method tells Python how to print objects of this class, which is going to be useful for debugging. You can see the __repr__() method in action in the Python interpreter session below:

>>> from app.models import User
>>> u = User(username='susan', email='susan@example.com')
>>> u
<User susan>

Creating The Migration Repository

The model class created in the previous section defines the initial database structure (or schema) for this application. But as the application continues to grow, it is likely that I will need to make changes to that structure such as adding new things, and sometimes to modify or remove items. Alembic (the migration framework used by Flask-Migrate) will make these schema changes in a way that does not require the database to be recreated from scratch every time a change needs to be made.

To accomplish this seemingly difficult task, Alembic maintains a migration repository, which is a directory in which it stores its migration scripts. Each time a change is made to the database schema, a migration script is added to the repository with the details of the change. To apply the migrations to a database, these migration scripts are executed in the sequence they were created.

Flask-Migrate exposes its commands through the flask command. You have already seen flask run, which is a sub-command that is native to Flask. The flask db sub-command is added by Flask-Migrate to manage everything related to database migrations. So let's create the migration repository for microblog by running flask db init:

(venv) $ flask db init
  Creating directory /home/miguel/microblog/migrations ... done
  Creating directory /home/miguel/microblog/migrations/versions ... done
  Generating /home/miguel/microblog/migrations/alembic.ini ... done
  Generating /home/miguel/microblog/migrations/env.py ... done
  Generating /home/miguel/microblog/migrations/README ... done
  Generating /home/miguel/microblog/migrations/script.py.mako ... done
  Please edit configuration/connection/logging settings in
  '/home/miguel/microblog/migrations/alembic.ini' before proceeding.

Remember that the flask command relies on the FLASK_APP environment variable to know where the Flask application lives. For this application, you want to set FLASK_APP to the value microblog.py, as discussed in Chapter 1.

After you run this command, you will find a new migrations directory, with a few files and a versions sub-directory inside. All these files should be treated as part of your project from now on, and in particular, should be added to source control along with your application code.

The First Database Migration

With the migration repository in place, it is time to create the first database migration, which will include the users table that maps to the User database model. There are two ways to create a database migration: manually or automatically. To generate a migration automatically, Alembic compares the database schema as defined by the database models, against the actual database schema currently used in the database. It then populates the migration script with the changes necessary to make the database schema match the application models. In this case, since there is no previous database, the automatic migration will add the entire User model to the migration script. The flask db migrate sub-command generates these automatic migrations:

(venv) $ flask db migrate -m "users table"
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate.compare] Detected added table 'user'
INFO  [alembic.autogenerate.compare] Detected added index 'ix_user_email' on '['email']'
INFO  [alembic.autogenerate.compare] Detected added index 'ix_user_username' on '['username']'
  Generating /home/miguel/microblog/migrations/versions/e517276bb1c2_users_table.py ... done

The output of the command gives you an idea of what Alembic included in the migration. The first two lines are informational and can usually be ignored. It then says that it found a user table and two indexes. Then it tells you where it wrote the migration script. The e517276bb1c2 code is an automatically generated unique code for the migration (it will be different for you). The comment given with the -m option is optional; it adds a short descriptive text to the migration.

The generated migration script is now part of your project, and needs to be incorporated into source control. You are welcome to inspect the script if you are curious to see how it looks. You will find that it has two functions called upgrade() and downgrade(). The upgrade() function applies the migration, and the downgrade() function removes it. This allows Alembic to migrate the database to any point in the history, even to older versions, by using the downgrade path.
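
To give you a rough idea, here is an abbreviated version of what this first script looks like. Take it only as a sketch, since the exact contents depend on the version of Alembic you have installed:

migrations/versions/e517276bb1c2_users_table.py: generated migration script (abbreviated)

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic
revision = 'e517276bb1c2'
down_revision = None

def upgrade():
    # create the user table with its four columns
    op.create_table('user',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.Column('username', sa.String(length=64), nullable=True),
        sa.Column('email', sa.String(length=120), nullable=True),
        sa.Column('password_hash', sa.String(length=128), nullable=True),
        sa.PrimaryKeyConstraint('id')
    )
    op.create_index(op.f('ix_user_email'), 'user', ['email'], unique=True)
    op.create_index(op.f('ix_user_username'), 'user', ['username'], unique=True)

def downgrade():
    # undo the changes made by upgrade(), in reverse order
    op.drop_index(op.f('ix_user_username'), table_name='user')
    op.drop_index(op.f('ix_user_email'), table_name='user')
    op.drop_table('user')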

The flask db migrate command does not make any changes to the database; it just generates the migration script. To apply the changes to the database, the flask db upgrade command must be used.

(venv) $ flask db upgrade
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> e517276bb1c2, users table

Because this application uses SQLite, the upgrade command will detect that a database does not exist and will create it (you will notice a file named app.db is added after this command finishes, that is the SQLite database). When working with database servers such as MySQL and PostgreSQL, you have to create the database in the database server before running upgrade.
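
As an example of what the configuration could look like in that case (the user, password, host and database names below are just placeholders, not values used in this tutorial), the DATABASE_URL environment variable would point to the server:

(venv) $ export DATABASE_URL=postgresql://dbuser:dbpassword@localhost/microblog
(venv) $ export DATABASE_URL=mysql+pymysql://dbuser:dbpassword@localhost/microblog

The second form assumes the PyMySQL driver is installed in the virtual environment.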

Note that Flask-SQLAlchemy uses a "snake case" naming convention for database tables by default. For the User model above, the corresponding table in the database will be named user. For an AddressAndPhone model class, the table would be named address_and_phone. If you prefer to choose your own table names, you can add an attribute named __tablename__ to the model class, set to the desired name as a string.
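
As a quick illustration only (the rest of this tutorial keeps the default names), this is how the User model would be mapped to a table called users instead of user:

class User(db.Model):
    __tablename__ = 'users'  # override the automatically generated table name
    id = db.Column(db.Integer, primary_key=True)
    # ...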

Database Upgrade and Downgrade Workflow

The application is in its infancy at this point, but it does not hurt to discuss what is going to be the database migration strategy going forward. Imagine that you have your application on your development machine, and also have a copy deployed to a production server that is online and in use.

Let's say that for the next release of your app you have to introduce a change to your models, for example a new table needs to be added. Without migrations you would need to figure out how to change the schema of your database, both in your development machine and then again in your server, and this could be a lot of work.

But with database migration support, after you modify the models in your application you generate a new migration script (flask db migrate), you probably review it to make sure the automatic generation did the right thing, and then apply the changes to your development database (flask db upgrade). You will add the migration script to source control and commit it.

When you are ready to release the new version of the application to your production server, all you need to do is grab the updated version of your application, which will include the new migration script, and run flask db upgrade. Alembic will detect that the production database is not updated to the latest revision of the schema, and run all the new migration scripts that were created after the previous release.

As I mentioned earlier, you also have a flask db downgrade command, which undoes the last migration. While you are unlikely to need this option on a production system, you may find it very useful during development. You may have generated a migration script and applied it, only to find that the changes that you made are not exactly what you need. In this case, you can downgrade the database, delete the migration script, and then generate a new one to replace it, as sketched below.
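
Here is roughly what that recovery sequence looks like; the revision file name below is just a placeholder for whatever script flask db migrate generated for you:

(venv) $ flask db downgrade                      # undo the last applied migration
(venv) $ rm migrations/versions/<revision>_*.py  # discard the incorrect migration script
(venv) $ flask db migrate -m "corrected change"  # generate a replacement script
(venv) $ flask db upgrade                        # apply the new migration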

Database Relationships

Relational databases are good at storing relations between data items. Consider the case of a user writing a blog post. The user will have a record in the users table, and the post will have a record in the posts table. The most efficient way to record who wrote a given post is to link the two related records.

Once a link between a user and a post is established, the database can answer queries about this link. The most trivial one is when you have a blog post and need to know what user wrote it. A more complex query is the reverse of this one. If you have a user, you may want to know all the posts that this user wrote. Flask-SQLAlchemy will help with both types of queries.

Let's expand the database to store blog posts to see relationships in action. Here is the schema for a new posts table:

posts table

The posts table will have the required id, the body of the post and a timestamp. But in addition to these expected fields, I'm adding a user_id field, which links the post to its author. You've seen that all users have an id primary key, which is unique. The way to link a blog post to the user that authored it is to add a reference to the user's id, and that is exactly what the user_id field is. This user_id field is called a foreign key. The database diagram above shows foreign keys as a link between the field and the id field of the table it refers to. This kind of relationship is called a one-to-many, because "one" user writes "many" posts.

The modified app/models.py is shown below:

app/models.py: Posts database table and relationship

from datetime import datetime
from app import db

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(64), index=True, unique=True)
    email = db.Column(db.String(120), index=True, unique=True)
    password_hash = db.Column(db.String(128))
    posts = db.relationship('Post', backref='author', lazy='dynamic')

    def __repr__(self):
        return '<User {}>'.format(self.username)

class Post(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    body = db.Column(db.String(140))
    timestamp = db.Column(db.DateTime, index=True, default=datetime.utcnow)
    user_id = db.Column(db.Integer, db.ForeignKey('user.id'))

    def __repr__(self):
        return '<Post {}>'.format(self.body)

The new Post class will represent blog posts written by users. The timestamp field is going to be indexed, which is useful if you want to retrieve posts in chronological order. I have also added a default argument, and passed the datetime.utcnow function. When you pass a function as a default, SQLAlchemy will set the field to the value of calling that function (note that I did not include the () after utcnow, so I'm passing the function itself, and not the result of calling it). In general, you will want to work with UTC dates and times in a server application. This ensures that you are using uniform timestamps regardless of where the users are located. These timestamps will be converted to the user's local time when they are displayed.

The user_id field was initialized as a foreign key to user.id, which means that it references an id value from the users table. In this reference the user part is the name of the database table for the model. It is an unfortunate inconsistency that in some instances such as in a db.relationship() call, the model is referenced by the model class, which typically starts with an uppercase character, while in other cases such as this db.ForeignKey() declaration, a model is given by its database table name, for which SQLAlchemy automatically uses lowercase characters and, for multi-word model names, snake case.

The User class has a new posts field, which is initialized with db.relationship. This is not an actual database field, but a high-level view of the relationship between users and posts, and for that reason it isn't in the database diagram. For a one-to-many relationship, a db.relationship field is normally defined on the "one" side, and is used as a convenient way to get access to the "many". So for example, if I have a user stored in u, the expression u.posts will run a database query that returns all the posts written by that user. The first argument to db.relationship is the model class that represents the "many" side of the relationship. This argument can be provided as a string with the class name if the model is defined later in the module. The backref argument defines the name of a field that will be added to the objects of the "many" class that points back at the "one" object. This will add a post.author expression that will return the user given a post. The lazy argument defines how the database query for the relationship will be issued, which is something that I will discuss later. Don't worry if these details don't make much sense just yet, I'll show you examples of this at the end of this article.

Since I have updates to the application models, a new database migration needs to be generated:

(venv) $ flask db migrate -m "posts table"
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.autogenerate.compare] Detected added table 'post'
INFO  [alembic.autogenerate.compare] Detected added index 'ix_post_timestamp' on '['timestamp']'
  Generating /home/miguel/microblog/migrations/versions/780739b227a7_posts_table.py ... done

And the migration needs to be applied to the database:

(venv) $ flask db upgrade
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade e517276bb1c2 -> 780739b227a7, posts table

If you are storing your project in source control, also remember to add the new migration script to it.

Playing with the Database

I have made you suffer through a long process to define the database, but I haven't shown you how everything works yet. Since the application does not have any database logic yet, let's play with the database in the Python interpreter to familiarize ourselves with it. Fire up Python by running python on your terminal. Make sure your virtual environment is activated before you start the interpreter.

Once in the Python prompt, let's import the database instance and the models:

>>> from app import db
>>> from app.models import User, Post

Start by creating a new user:

>>> u = User(username='john', email='john@example.com')
>>> db.session.add(u)
>>> db.session.commit()

Changes to a database are done in the context of a database session, which can be accessed as db.session. Multiple changes can be accumulated in a session and once all the changes have been registered you can issue a single db.session.commit(), which writes all the changes atomically. If at any time while working on a session there is an error, a call to db.session.rollback() will abort the session and remove any changes stored in it. The important thing to remember is that changes are only written to the database when a commit is issued with db.session.commit(). Sessions guarantee that the database will never be left in an inconsistent state.
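
As a minimal sketch of the rollback behavior (this is just an experiment you could try in the same interpreter session, not part of the application), a commit that violates the unique constraint on usernames can be aborted as follows:

>>> u = User(username='john', email='another-john@example.com')
>>> db.session.add(u)
>>> try:
...     db.session.commit()    # fails, because the username 'john' already exists
... except Exception:
...     db.session.rollback()  # abort the session; the database is left unchanged
...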

Let's add another user:

>>> u = User(username='susan', email='susan@example.com')
>>> db.session.add(u)
>>> db.session.commit()

The database can answer a query that returns all the users:

>>> users = User.query.all()
>>> users
[<User john>, <User susan>]
>>> for u in users:
...     print(u.id, u.username)
...
1 john
2 susan

All models have a query attribute that is the entry point to run database queries. The most basic query is the one that returns all elements of that class, which is appropriately named all(). Note that the id fields were automatically set to 1 and 2 when those users were added.

Here is another way to do queries. If you know the id of a user, you can retrieve that user as follows:

>>> u = User.query.get(1)
>>> u
<User john>

Now let's add a blog post:

>>> u = User.query.get(1)
>>> p = Post(body='my first post!', author=u)
>>> db.session.add(p)
>>> db.session.commit()

I did not need to set a value for the timestamp field because that field has a default, which you can see in the model definition. And what about the user_id field? Recall that the db.relationship that I created in the User class adds a posts attribute to users, and also an author attribute to posts. I assign an author to a post using the author virtual field instead of having to deal with user IDs. SQLAlchemy is great in that respect, as it provides a high-level abstraction over relationships and foreign keys.
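
If you are curious, you can confirm in the same session that SQLAlchemy filled in the foreign key behind the scenes:

>>> p.user_id
1
>>> p.author
<User john>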

To complete this session, let's look at a few more database queries:

>>> # get all posts written by a user
>>> u = User.query.get(1)
>>> u
<User john>
>>> posts = u.posts.all()
>>> posts
[<Post my first post!>]

>>> # same, but with a user that has no posts
>>> u = User.query.get(2)
>>> u
<User susan>
>>> u.posts.all()
[]

>>> # print post author and body for all posts
>>> posts = Post.query.all()
>>> for p in posts:
...     print(p.id, p.author.username, p.body)
...
1 john my first post!

>>> # get all users in reverse alphabetical order
>>> User.query.order_by(User.username.desc()).all()
[<User susan>, <User john>]

The Flask-SQLAlchemy documentation is the best place to learn about the many options that are available to query the database.
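
Just as a small taste of what is possible with standard query methods (nothing here is specific to this application), here are two common variations you could try in the same session:

>>> # get a user by username, or None if it does not exist
>>> User.query.filter_by(username='susan').first()
<User susan>
>>> # get the three most recent posts
>>> Post.query.order_by(Post.timestamp.desc()).limit(3).all()
[<Post my first post!>]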

To complete this section, let's erase the test users and posts created above, so that the database is clean and ready for the next chapter:

>>> users = User.query.all()
>>> for u in users:
...     db.session.delete(u)
...
>>> posts = Post.query.all()
>>> for p in posts:
...     db.session.delete(p)
...
>>> db.session.commit()

Shell Context

Remember what you did at the start of the previous section, right after starting a Python interpreter? The first thing you did was to run some imports:

>>> from app import db
>>> from app.models import User, Post

While you work on your application, you will need to test things out in a Python shell very often. Having to repeat the above imports every time is going to get tedious, so this is a good time to address this problem.

The flask shell command is another very useful tool in the flask umbrella of commands. The shell command is the second "core" command implemented by Flask, after run. The purpose of this command is to start a Python interpreter in the context of the application. What does that mean? See the following example:

(venv) $ python
>>> app
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'app' is not defined
>>>

(venv) $ flask shell
>>> app
<Flask 'app'>

With a regular interpreter session, the app symbol is not known unless it is explicitly imported, but when using flask shell, the command pre-imports the application instance. The nice thing about flask shell is not that it pre-imports app, but that you can configure a "shell context", which is a list of other symbols to pre-import.

The following function in microblog.py creates a shell context that adds the database instance and models to the shell session:

from app import app, db
from app.models import User, Post

@app.shell_context_processor
def make_shell_context():
    return {'db': db, 'User': User, 'Post': Post}

The app.shell_context_processor decorator registers the function as a shell context function. When the flask shell command runs, it will invoke this function and register the items returned by it in the shell session. The reason the function returns a dictionary and not a list is that for each item you have to also provide a name under which it will be referenced in the shell, which is given by the dictionary keys.

After you add the shell context processor function you can work with database entities without having to import them:

(venv) $ flask shell
>>> db
<SQLAlchemy engine=sqlite:////Users/migu7781/Documents/dev/flask/microblog2/app.db>
>>> User
<class 'app.models.User'>
>>> Post
<class 'app.models.Post'>

If you try the above and get NameError exceptions when you try to access db, User and Post, then the make_shell_context() function is not being registered with Flask. The most likely cause of this is that you have not set FLASK_APP=microblog.py in the environment. In that case, go back to Chapter 1 and review how to set the FLASK_APP environment variable. If you often forget to set this variable when you open new terminal windows, you may consider adding a .flaskenv file to your project, as described at the end of that chapter.
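
As a reminder, such a .flaskenv file in the top-level directory of the project only needs the one variable assignment:

FLASK_APP=microblog.py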

821 comments

  • #276 Miguel Grinberg said 2018-10-13T19:23:32Z

    @Hamza: You forgot to add the "author" relationship. This is specified in the backref argument of the posts relationship in the User model.

  • #277 David Isebarn said 2018-10-16T11:47:45Z

    Excellent tutorial Miguel!

  • #278 Jose Cha said 2018-10-23T12:19:38Z

    First of all, I want to say a big thank you for such a well constructed, jam packed tutorial covering so many diverse topics - I really enjoy and appreciate your content Miguel!

    Now to my question, this may be an implementation detail that may be not so important, but I was wondering why the variables in the model (e.g. User.id) are initialised as class variables and not as instance variables. Could we achieve the same thing through the init method and passing the arguments?

    Thank you!

  • #279 Miguel Grinberg said 2018-10-23T17:59:02Z

    @Jose: this style of defining variables is common in Python. In this case it is SQLAlchemy that uses this format. The reason why the columns are class variables is that these are not attributes of a given row of the table, they are the definitions for the columns, which apply to all the rows. If you define them as instance variables then you would not be able to refer to a column using User.id, since that would not exist.

  • #280 Bob said 2018-10-30T03:13:34Z

    Dear Miguel,

    Thank you for the tutorial. Up until now I was able to resolve all issues on my own. However this time it feels like I got stuck. I am getting an error saying "Instance of SQLAlchemy has no column member" or "Instance of SQLAlchemy has no string member". I am wondering if this is a typical issue faced by beginners.

  • #281 Oscar J. said 2018-10-30T13:44:35Z

    Hello Miguel,

    I was wondering about application admin. I know that flask can handle several databases simultaneously through binds (btw, it would be awesome if you write about this topic), so I was thinking about creating a new db with a unique table 'roles' that can be used to specify which users have admin powers (i.e. remove posts from other users). I want to somehow password protect this second database so regular users cannot add themselves to this table. Any experience/suggestions about it?

    Cheers!

  • #282 Miguel Grinberg said 2018-10-31T08:08:57Z

    @Bob: I don't think I've seen these errors, so it's probably not a common problem. You need to provide more details for me to help you. Which step gives you this error? If you get a stack trace, then showing all of it might help too.

  • #283 Miguel Grinberg said 2018-10-31T08:14:27Z

    @Oscar: what is the purpose of using a separate database for your roles? Wouldn't a roles table in the same database be the same? Users of your web application do not have direct access to the database, so they would not be able to change their role, they have to pass through your application, so as long as your application does not allow them to fix their roles you should be fine.

  • #284 Ngoc Anh Do said 2018-11-03T03:19:07Z

    Hello Miguel! I have a problem I hope you can help me with soon! I want to create two tables, Teacher and Student, like below, but when I do the migrate, it results in an error. Here is my code: https://pastebin.com/N11YQ79b Error: https://pastebin.com/TFBQpyEf

  • #285 Miguel Grinberg said 2018-11-03T10:36:01Z

    @Ngoc: I think you have a typo in the database url, slite instead of sqlite.

  • #286 Benjamin said 2018-11-03T12:33:44Z

    Hi Miguel, very good tutorial, just a suggestion: as with some others, everything was fine until I got to the migrations part and the database was not getting created. After running that command, all that showed was:

        INFO [alembic.runtime.migration] Context impl SQLiteImpl.
        INFO [alembic.runtime.migration] Will assume non-transactional DDL.
        INFO [alembic.env] No changes in schema detected.

    I finally started comparing my code to the code on GitHub, and it turns out it was because microblog.py had:

        from app import app

    and needed to change to:

        from app import app, db
        from app.models import User, Post

    which you covered in the Shell Context section, but for those following along and typing in the code rather than cloning from GitHub, you might want to add a little code section before the Creating The Migration Repository section.

    Thanks, Benjamin

  • #287 Miguel Grinberg said 2018-11-03T20:20:56Z

    @Benjamin: you are mistaken; you do not need to import anything else in microblog.py to get Flask-Migrate to work. My guess is that you missed the "import models" at the bottom of __init__.py instead.

  • #288 Benjamin said 2018-11-04T00:09:27Z

    @Miguel Yep you're right, that's exactly what it was.

    Thanks, Benjamin

  • #289 AR said 2018-11-06T17:56:18Z

    Hi Miguel,

    In case of db entries, when I am creating multiple rows at a time, this is how I am currently doing it:

        pd = any list
        for i in pd:
            new_row = AnyModel(id=value, name=value)
            db.session.add(new_row)
            db.session.commit()
            gc.collect()

    This method commits to the database on each iteration of the loop, which is not very efficient. What do you suggest is the best way to handle this?

    p.s. - in your opinion, is gc.collect() useful after each db commit?

  • #290 Miguel Grinberg said 2018-11-06T18:16:16Z

    @AR: move the db.session.commit() outside the loop, so that you issue a single commit with all the changes. The gc.collect() is likely doing nothing useful there. Why did you put it there?

  • #291 AR said 2018-11-06T18:34:52Z

    gc.collect() is for garbage collection, to keep memory wastage down, in case opening and closing a connection holds on to some memory.

    Can I use the same variable name 'new_row' for each iteration, or do I need to change it or put an iterator in it?

  • #292 Miguel Grinberg said 2018-11-06T23:13:10Z

    @AR: Yes, I know what gc.collect() does, but in most cases there is no extra benefit in calling this function explicitly, since Python itself calls it while your script runs. You can use the same variable, that's not a problem in this case.

  • #293 Fernando said 2018-11-10T22:37:05Z

    Wonderful work Miguel! Thanks for this great Flask guidance.

    My two cents: NoSQL stands for 'Not Only SQL'; it does not mean 'SQL not implemented'. Actually you can find SQL (or SQL-like) layers on top of NoSQL databases.

    Cheers, Fernando

  • #294 Ola said 2018-11-16T14:17:58Z

    Hi Miguel,

    Thank you for your tutorial! I seem to run into an issue when running the app, after adding the database support. I get "cannot import name 'db'" in models.py; db is defined as an object in __init__.py but it seems this is not recognized. I've looked through my code of course but can find nothing which differs from yours, and I've initialized the database and migration repository without errors. Do you have any idea what it is that I've missed here?

  • #295 Miguel Grinberg said 2018-11-16T16:47:16Z

    @Ola: I need to see the complete stack trace of the error to know what's going on.

  • #296 Seth said 2018-11-19T23:45:12Z

    Running FLASK_APP=microblog.py flask shell does 'fix' the issue:

    (venv) (xenial)gilmore@localhost:~/microblog$ FLASK_APP=microblog.py
    (venv) (xenial)gilmore@localhost:~/microblog$ flask shell
    Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) [GCC 7.2.0] on linux
    App: app [production]
    Instance: /home/gilmore/microblog/instance
    >>> db
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    NameError: name 'db' is not defined
    >>> quit()

    (venv) (xenial)gilmore@localhost:~/microblog$ FLASK_APP=microblog.py flask shell
    Python 3.6.5 |Anaconda, Inc.| (default, Apr 29 2018, 16:14:56) [GCC 7.2.0] on linux
    App: app [production]
    Instance: /home/gilmore/microblog/instance
    >>> db

    This can be found from the stackoverflow post (https://stackoverflow.com/questions/52564254/nameerror-name-user-is-not-defined-when-flask-shell)

    When I adjust my .flaskenv.py file to read FLASK_APP=microblog.py and run flask shell, the original issue persists.

  • #297 Miguel Grinberg said 2018-11-20T10:16:19Z

    @seth: variables that you set in the shell prompt need to be "exported" for them to be passed on to processes that you run from the shell. Use "export FLASK_APP=...". The .flaskenv problem could be that you don't have python-dotenv installed. This package is required for Flask to import variables from the .flaskenv and .env files.

  • #298 Phil said 2018-11-20T11:19:40Z

    Hello, Flask shell, I've tried everything, file/folder names, envir variables, can't get it to work. Keep getting namespace errors on db/User/Post, but app works. How do I troubleshoot this? win10-64 VSCode cmdr

    Excellent work btw <; Thanks, Phil

  • #299 Miguel Grinberg said 2018-11-20T12:47:38Z

    @Phil: you are not showing me what you tried, so I can't really tell you. Clearly you've made a mistake somewhere. Maybe write a detailed question in Stack Overflow with everything that you tried and then I might be able to help.

  • #300 Phil Curtis said 2018-11-22T04:13:24Z

    Hi Miguel, Thank you for the advice. I was having a great session the other day that ended in the frustrations with the Shell Context failure. This is our life! I'm working on my home PC and have pushed on to the user login lesson but I will round back to the shell and document my steps, code, and question better. Thank you for granting me the space to vent. Phil
