ng2-semantic-ui search/select with ajax lookup
The Issue
When using ng2-semantic-ui select or search components, the lookup function cannot find the service and always returns 0 results.
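The usual cause is that the component invokes the lookup function detached from your class, so `this` no longer points at your component and the injected service is unreachable. A minimal sketch of the fix, assuming a hypothetical UserService whose search() returns a Promise (the optionsLookup input name may vary by library version):

import { Component } from '@angular/core';
import { UserService } from './user.service'; // hypothetical service

@Component({
  selector: 'user-search',
  template: `<sui-search [optionsLookup]="userLookup" placeholder="Search users"></sui-search>`
})
export class UserSearchComponent {
  constructor(private userService: UserService) {}

  // An arrow-function property keeps `this` bound to the component,
  // so the injected service is still reachable when the library
  // calls the lookup internally.
  userLookup = (query: string): Promise<any[]> =>
    this.userService.search(query);
}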
The steps are taken from http://docs.aws.amazon.com/AmazonECR/latest/userguide/retag-aws-cli.html. The issue I ran into was the wrong region being selected, which is fixed with the --region param.
This is useful for adding tags to your :latest image. For example, QA looks at :latest, but my prod service looks for image :prod. I’ll also add a tag for the prod deployment date.
MY_MANIFEST=$(aws ecr batch-get-image --repository-name REPO_NAME --image-ids imageTag=latest --region us-west-2 --query 'images[].imageManifest' --output text)
aws ecr put-image --repository-name REPO_NAME --image-tag NEW_TAG_NAME --image-manifest "$MY_MANIFEST" --region us-west-2
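For the deployment-date tag mentioned above, push the same manifest again under another name (the tag format here is just an example):

aws ecr put-image --repository-name REPO_NAME --image-tag prod-$(date +%Y-%m-%d) --image-manifest "$MY_MANIFEST" --region us-west-2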
The Angular 2 router only updates the target component with the new URL and params. Each component is initialized only when it comes into view; if it is already visible when the URL changes, it will not be re-initialized by the router. For Google: angular 2 components not updating when url changes.
If you’re used to Angular 1 with ui-router (the $state and $stateParams services), Angular 2 can be frustrating. Instead of reaching for ui-router, we will try to stick with the barebones Angular 2 setup. Angular 2 ships with its own router, which is a huge improvement over the Angular 1 router, but it does not follow the same design as ui-router, which in my opinion was much easier.
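The fix is to treat the route params as a stream rather than a one-time input. A minimal sketch (component and method names are made up):

import { Component, OnDestroy, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { Subscription } from 'rxjs/Subscription';

@Component({ selector: 'item-detail', template: '...' })
export class ItemDetailComponent implements OnInit, OnDestroy {
  private paramsSub: Subscription;

  constructor(private route: ActivatedRoute) {}

  ngOnInit() {
    // route.params is an observable, so this fires on every URL change,
    // even when the router reuses the existing component instance.
    this.paramsSub = this.route.params.subscribe(params => this.load(params['id']));
  }

  ngOnDestroy() {
    this.paramsSub.unsubscribe();
  }

  private load(id: string) {
    // fetch data for the new id here
  }
}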
PhpStorm is hiding the node_modules directory inside the IDE. Since I was stuck like this for a few weeks, I'll post the fix here.
denied: Your Authorization Token has expired. Please run 'aws ecr get-login' to fetch a new one.
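The token from get-login is only valid for 12 hours, so the docker login has to be refreshed periodically. With awscli v1 it looks something like this (drop --no-include-email on older versions; the region is an example):

$(aws ecr get-login --no-include-email --region us-west-2)

The $( ) wrapper immediately runs the docker login command that get-login prints.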
This article shows how to set up a docker image using node and load your config info for each environment from IAM-protected S3. The node base image is stripped down, so there are a few modifications to the AWS example scripts that need to happen.
Installing awscli into a docker container running node:latest.
building '_yaml' extension
creating build/temp.linux-x86_64-3.4/ext
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -fPIC -I/usr/include/python3.4m -c ext/_yaml.c -o build/temp.linux-x86_64-3.4/ext/_yaml.o
ext/_yaml.c:4:20: fatal error: Python.h: No such file or directory
#include "Python.h"
^
compilation terminated.
error: command 'x86_64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
Command "/usr/bin/python3.4 -u -c "import setuptools, tokenize;__file__='/tmp/pip-build-wd2m_by7/pyyaml/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-ulirskuk-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-wd2m_by7/pyyaml/
I’ve seen many methods for making mongoose reconnect to mongo after an error. This is a huge problem, especially if the network drops for even a second. In our app, mongoose didn't even know the connection had dropped, and our application could not reach the database. To fix this, we had to force mongoose to reconnect on its own.
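A minimal sketch of the approach, assuming mongoose 4.x (option names changed in later major versions; URI and timings are examples):

var mongoose = require('mongoose');

var uri = 'mongodb://localhost/mydb';
var options = {
  server: {
    // keep the underlying driver trying, and keep sockets alive
    auto_reconnect: true,
    socketOptions: { keepAlive: 120, connectTimeoutMS: 30000 }
  }
};

function connect() {
  mongoose.connect(uri, options);
}

mongoose.connection.on('error', function (err) {
  console.error('mongoose error', err);
  mongoose.disconnect(); // triggers the 'disconnected' handler below
});

mongoose.connection.on('disconnected', function () {
  // don't trust the driver: schedule our own reconnect attempt
  setTimeout(connect, 5000);
});

connect();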
Processing large amounts of data in MongoDB and Node.js can easily block your single-threaded application. This is just my reference for when I need to process a large number of documents using collection.initializeUnorderedBulkOp.
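A minimal sketch with the 2.x node driver (database, collection, query, and batch size are made up); executing every 1000 ops keeps memory bounded and lets other I/O interleave instead of queuing one giant operation:

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/mydb', function (err, db) {
  if (err) throw err;
  var col = db.collection('events');
  var bulk = col.initializeUnorderedBulkOp();
  var pending = 0;

  col.find({ processed: false }).forEach(function (doc) {
    bulk.find({ _id: doc._id }).updateOne({ $set: { processed: true } });
    if (++pending === 1000) {
      // fire off this batch and start a fresh one
      bulk.execute(function (err) { if (err) console.error(err); });
      bulk = col.initializeUnorderedBulkOp();
      pending = 0;
    }
  }, function (err) {
    if (err) throw err;
    // end of cursor: flush the final partial batch, then close
    if (pending > 0) {
      bulk.execute(function () { db.close(); });
    } else {
      db.close();
    }
  });
});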