High-quality, learner-focused IT workshops on Artificial Intelligence,
Natural Language Processing, Blockchain, Machine Learning, Big Data, and more.
Corporate and community trainings tailored to your level
of experience - from rookie to advanced.
Select the training that best fits your individual
or organisational needs.
Workshops we offer:
WHY CODETE WORKSHOPS?
- 2020, Kraków
Image recognition with Deep Learning
The training covers the implementation of an end-to-end deep learning project for image recognition using CNN-based architectures, including data preparation and the explainability of such systems. During the classes, we are going to cover the theoretical background of Convolutional Neural Networks, discuss state-of-the-art architectures, train a network from scratch, and finally apply transfer learning...
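As a rough illustration of the theoretical background the course mentions (this is an editorial sketch, not the workshop's actual material), the basic building block of a CNN classifier - a convolution followed by a ReLU and max-pooling - can be written in plain NumPy. The hand-picked edge-detection kernel stands in for the filters a real network would learn during training:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid-mode 2D convolution (cross-correlation), the core CNN operation."""
    h, w = image.shape
    kh, kw = kernel.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def relu(x):
    """Element-wise non-linearity applied after each convolution."""
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Downsample by keeping the maximum in each size x size window."""
    h, w = x.shape
    h, w = h - h % size, w - w % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A 28x28 "image" passed through one conv -> ReLU -> pool stage,
# the stage that CNN architectures stack repeatedly before classifying.
image = np.random.default_rng(0).standard_normal((28, 28))
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])  # a fixed vertical-edge detector
feature_map = max_pool(relu(conv2d(image, edge_kernel)))
print(feature_map.shape)  # (13, 13): 28x28 -> 26x26 conv output -> 13x13 after pooling
```

Transfer learning, the final topic listed, amounts to reusing filters like these from a network already trained on a large dataset and retraining only the last layers on your own images.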
- 2020, Kraków
Advanced Spring Boot
Duration: 2 working days for a maximum of 12 participants...
Gain knowledge and new skills with your friends for less.
CTO Codete, O'Reilly Trainer, Elsevier Reviewer
Obtained a PhD in Computer Science in 2015 at the Jagiellonian University in Cracow. CTO and founder of Codete. Leads and mentors teams at Codete and works with Fortune 500 companies on data science projects. Built a research lab at Codete that works on machine learning methods and big data solutions. Gives talks and trainings in German and English on data science, with a focus on applied machine learning. Currently involved in trainings at O'Reilly.
Data Science Lead
Kacper Łukawski is a Data Engineer and Tech Lead at Codete, currently involved in Big Data projects and internal research in the field of Machine Learning. An enthusiast of applying data science in various sectors.
Technology Consulting Manager
Paweł Dyrek is a Project Manager with multiple international projects completed in recent years, an experienced PHP Symfony developer, and a big enthusiast of Agile methodologies.
Michał Marciniec is a Tech Lead at Codete. He is a Java stereotype breaker and an eager promoter of a fresh approach to Java programming. Currently focused on web backend development with Java.
They trusted us
I think Karol understood how to build chatbots well and came well-prepared for the session. There was a lot of good information at an overview level. The source code in the Jupyter notebooks was also nice; it illustrated the different topics that Karol discussed and provided good examples of how to build different types of bots.
I learned about the basics of Machine Learning in general. That's really amazing. I was surprised when I saw the algorithms in action. After these workshops I managed to search the Internet for further reading - something that was really hard to start before; now I have the tools and a general idea of what I need. The practical exercises were really helpful for finding out which tool best suits my expectations. The trainer was extremely helpful: any time we struggled with an issue, he was immediately next to us and helped us solve the problem.
It was a useful session for picking up some tips on preparing data for machine learning. I gathered quite a bit of material for further exploration and reading. Being a novice in this area, these sessions help point out what the gaps in our knowledge are and what the state of the art is.
The stated objective of this session was "Karol walks you through implementing a bot of each type", e.g., rule-, retrieval-, and generative-based customer support chatbots. Perhaps it's just me, but I continue to find Jupyter-based courses a poor learning experience, particularly when they introduce a subject area. Maybe the course was meant for someone who had already implemented a bot and was looking for additional intermediate information on next steps. But Jupyter doesn't seem to provide a good basis for a tutorial in which one goes step by step through building/programming. Because of all this, I didn't feel that the session really walked through implementing a bot of each type. Rather, it showed the code (without really walking through it) for bots of each type that were already implemented. Perhaps a subtle but big distinction. Also, as one attendee commented, it would have been nice if a requirements.txt file had been provided ahead of time, so that those of us not using Docker or Anaconda could set things up in advance of the session (I use pipenv).