PL λαβ


Lab 1.2a [김예령]: Parser

IC4RUS 2021. 4. 8. 20:43

 

LAB 1.2 Goals

  • Accept multiple lines of code as input and analyze their syntax.
  • Lexical analysis turns the input into tokens; syntax analysis then turns the token list into a syntax tree.
  • Implement an Input class, a Token class (including a nextToken method), and a parse function.
  • Print the syntax tree produced by the finished parser.
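The lexing step in the goals above can be illustrated with the pad-and-split trick used later in the main block: a sketch assuming Scheme-style input where padding '(' and ')' with spaces makes `split()` yield one lexeme per element (strings and comments are not handled).

```python
# Sketch: turn one line of Lisp-style source into a list of lexemes.
# Assumes lexemes are whitespace-separated once parentheses are padded.
def tokenize(line):
    return line.replace('(', ' ( ').replace(')', ' ) ').split()

print(tokenize("(define x (+ 1 2))"))
# → ['(', 'define', 'x', '(', '+', '1', '2', ')', ')']
```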

Implementation

lexer - OK

parser - still in progress (T.T)

from enum import Enum

class Error(Enum):
    Error_OK = 0
    Error_Syntax = 1

class Type(Enum):
    EOF = 0
    OP = 1
    CP = 2
    DEF = 3
    LAM = 4
    ID = 5
    PLUS_SYM = 6
    MINUS_SYM = 7
    G_SYM = 8
    L_SYM = 9
    IF = 10

class Token:
    def __init__(self, type=Type.EOF, value=0):
        self.type = type
        self.value = value

    def __str__(self):
        if self.type == Type.ID:
            return f"Token [{self.type}, Value: {self.value}]"
        return f"Token [{self.type}]"

    # def NextToken(self):
    #     # giving next token. how?

def Lexer(lists):   # Input: list of lexeme lists / Return: Token list
    lexList = []

    for line in lists:
        for word in line:
            if word == '(':
                lexList.append(Token(Type.OP))
            elif word == ')':
                lexList.append(Token(Type.CP))
            elif word == 'define':
                lexList.append(Token(Type.DEF))
            elif word == 'lambda':
                lexList.append(Token(Type.LAM))
            elif word == 'if':
                lexList.append(Token(Type.IF))
            elif word == '+':
                lexList.append(Token(Type.PLUS_SYM))
            elif word == '-':
                lexList.append(Token(Type.MINUS_SYM))
            elif word == '>':
                lexList.append(Token(Type.G_SYM))
            elif word == '<':
                lexList.append(Token(Type.L_SYM))
            else:
                lexList.append(Token(Type.ID, value=word))

    return lexList

# def Parser(lists):  # Input: Token List / return: Parsed Token List
    

if __name__ == "__main__":
    inputList = []

    while True:
        try:
            line = input()
        except EOFError:    # stdin closed without a trailing blank line
            break
        if not line:
            break
        inputList.append(line.replace('(', ' ( ').replace(')', ' ) ').split())

    results = Lexer(inputList)
    print("========Lexing Result========")
    for result in results:
        print(result)

    # Parser(results)
    print("========Parsing Result========")
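The commented-out Parser could be completed with a recursive descent over the token list, in the style of Norvig's lispy from the references below. This is only a sketch, not the lab's official solution: it assumes the Type/Token classes above (restated here minimally so the snippet runs on its own), consumes tokens destructively with `pop(0)`, and represents the syntax tree as nested Python lists, with keyword/operator tokens shown by their enum name.

```python
from enum import Enum

# Minimal stand-ins for the Type/Token classes defined above.
class Type(Enum):
    EOF = 0
    OP = 1
    CP = 2
    DEF = 3
    LAM = 4
    ID = 5
    PLUS_SYM = 6
    MINUS_SYM = 7
    G_SYM = 8
    L_SYM = 9
    IF = 10

class Token:
    def __init__(self, type=Type.EOF, value=0):
        self.type = type
        self.value = value

def Parser(tokens):
    """Consume one expression from the front of the token list and
    return it as a nested list (the syntax tree)."""
    if not tokens:
        raise SyntaxError("unexpected end of input")
    token = tokens.pop(0)
    if token.type == Type.OP:
        tree = []
        while tokens and tokens[0].type != Type.CP:
            tree.append(Parser(tokens))   # recurse on each sub-expression
        if not tokens:
            raise SyntaxError("missing ')'")
        tokens.pop(0)                     # drop the closing ')'
        return tree
    elif token.type == Type.CP:
        raise SyntaxError("unexpected ')'")
    elif token.type == Type.ID:
        return token.value                # leaf: identifier or literal
    else:
        return token.type.name            # leaf: keyword/operator

# Example: (+ 1 2) as a token list
toks = [Token(Type.OP), Token(Type.PLUS_SYM),
        Token(Type.ID, '1'), Token(Type.ID, '2'), Token(Type.CP)]
print(Parser(toks))
# → ['PLUS_SYM', '1', '2']
```

Because `Parser` returns after one complete expression, the main block would call it in a loop until the token list is empty, printing one tree per top-level expression.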

Reference

  • How to Write a (Lisp) Interpreter (in Python) - norvig.com/lispy.html
  • Chapter 3: Parser - lwh.jp/lisp/parser.html
  • Lisp BNF - iamwilhelm.github.io/bnf-examples/lisp
